Nov 5 15:50:16.284923 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:50:16.284993 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:50:16.285008 kernel: BIOS-provided physical RAM map:
Nov 5 15:50:16.285015 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 5 15:50:16.285022 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 5 15:50:16.285029 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 5 15:50:16.285038 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Nov 5 15:50:16.285050 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Nov 5 15:50:16.285057 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:50:16.285066 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 5 15:50:16.285073 kernel: NX (Execute Disable) protection: active
Nov 5 15:50:16.285080 kernel: APIC: Static calls initialized
Nov 5 15:50:16.285088 kernel: SMBIOS 2.8 present.
Nov 5 15:50:16.285095 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Nov 5 15:50:16.285104 kernel: DMI: Memory slots populated: 1/1
Nov 5 15:50:16.285114 kernel: Hypervisor detected: KVM
Nov 5 15:50:16.285126 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 5 15:50:16.285134 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 15:50:16.285141 kernel: kvm-clock: using sched offset of 4356245810 cycles
Nov 5 15:50:16.285150 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 15:50:16.285159 kernel: tsc: Detected 1995.312 MHz processor
Nov 5 15:50:16.285167 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:50:16.285176 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:50:16.285187 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Nov 5 15:50:16.285195 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 5 15:50:16.285203 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:50:16.285212 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:50:16.285220 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Nov 5 15:50:16.285228 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:16.285236 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:16.285246 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:16.285254 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 5 15:50:16.285262 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:16.285270 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:16.285279 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:16.285286 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:16.285295 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Nov 5 15:50:16.285305 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Nov 5 15:50:16.285313 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 5 15:50:16.285321 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Nov 5 15:50:16.285333 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Nov 5 15:50:16.285341 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Nov 5 15:50:16.285352 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Nov 5 15:50:16.285361 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 5 15:50:16.285369 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 5 15:50:16.285378 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00001000-0x7ffdafff]
Nov 5 15:50:16.285386 kernel: NODE_DATA(0) allocated [mem 0x7ffd3dc0-0x7ffdafff]
Nov 5 15:50:16.285395 kernel: Zone ranges:
Nov 5 15:50:16.285405 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:50:16.285414 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Nov 5 15:50:16.285423 kernel: Normal empty
Nov 5 15:50:16.285431 kernel: Device empty
Nov 5 15:50:16.285439 kernel: Movable zone start for each node
Nov 5 15:50:16.285448 kernel: Early memory node ranges
Nov 5 15:50:16.285456 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 5 15:50:16.285464 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Nov 5 15:50:16.285491 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Nov 5 15:50:16.285499 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:50:16.285508 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 15:50:16.285517 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Nov 5 15:50:16.285525 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 15:50:16.285537 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 15:50:16.285546 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:50:16.285560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 15:50:16.285569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 15:50:16.285578 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 15:50:16.285589 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 15:50:16.285598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 15:50:16.285606 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:50:16.285615 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 15:50:16.285623 kernel: TSC deadline timer available
Nov 5 15:50:16.285634 kernel: CPU topo: Max. logical packages: 1
Nov 5 15:50:16.285643 kernel: CPU topo: Max. logical dies: 1
Nov 5 15:50:16.285651 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:50:16.285660 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:50:16.285668 kernel: CPU topo: Num. cores per package: 2
Nov 5 15:50:16.285676 kernel: CPU topo: Num. threads per package: 2
Nov 5 15:50:16.285685 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 5 15:50:16.285695 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 15:50:16.285704 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Nov 5 15:50:16.285713 kernel: Booting paravirtualized kernel on KVM
Nov 5 15:50:16.285752 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:50:16.285765 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 5 15:50:16.285780 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 5 15:50:16.285795 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 5 15:50:16.285813 kernel: pcpu-alloc: [0] 0 1
Nov 5 15:50:16.285827 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 5 15:50:16.285843 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:50:16.285858 kernel: random: crng init done
Nov 5 15:50:16.285872 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:50:16.285885 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 5 15:50:16.285899 kernel: Fallback order for Node 0: 0
Nov 5 15:50:16.285915 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524153
Nov 5 15:50:16.285929 kernel: Policy zone: DMA32
Nov 5 15:50:16.285944 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:50:16.285957 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 15:50:16.285972 kernel: Kernel/User page tables isolation: enabled
Nov 5 15:50:16.285986 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:50:16.286001 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:50:16.286019 kernel: Dynamic Preempt: voluntary
Nov 5 15:50:16.286033 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:50:16.286048 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:50:16.286062 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 15:50:16.286075 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:50:16.286089 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:50:16.286102 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:50:16.286116 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:50:16.286134 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 15:50:16.286150 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:50:16.286172 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:50:16.286187 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:50:16.286202 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 5 15:50:16.286217 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 15:50:16.286232 kernel: Console: colour VGA+ 80x25
Nov 5 15:50:16.286256 kernel: printk: legacy console [tty0] enabled
Nov 5 15:50:16.286276 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:50:16.286293 kernel: ACPI: Core revision 20240827
Nov 5 15:50:16.286309 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 15:50:16.286336 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:50:16.286353 kernel: x2apic enabled
Nov 5 15:50:16.286369 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:50:16.286384 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:50:16.286402 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 5 15:50:16.286427 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Nov 5 15:50:16.286443 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 5 15:50:16.286464 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 5 15:50:16.289090 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:50:16.289109 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 15:50:16.289119 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 15:50:16.289129 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 5 15:50:16.289139 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:50:16.289148 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:50:16.289158 kernel: MDS: Mitigation: Clear CPU buffers
Nov 5 15:50:16.289171 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 5 15:50:16.289183 kernel: active return thunk: its_return_thunk
Nov 5 15:50:16.289192 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 5 15:50:16.289201 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:50:16.289211 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:50:16.289220 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:50:16.289230 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:50:16.289239 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 5 15:50:16.289251 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:50:16.289260 kernel: pid_max: default: 32768 minimum: 301
Nov 5 15:50:16.289269 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:50:16.289278 kernel: landlock: Up and running.
Nov 5 15:50:16.289288 kernel: SELinux: Initializing.
Nov 5 15:50:16.289297 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:50:16.289307 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:50:16.289318 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Nov 5 15:50:16.289327 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Nov 5 15:50:16.289336 kernel: signal: max sigframe size: 1776
Nov 5 15:50:16.289346 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:50:16.289356 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:50:16.289366 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 15:50:16.289375 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 5 15:50:16.289387 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:50:16.289402 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:50:16.289411 kernel: .... node #0, CPUs: #1
Nov 5 15:50:16.289421 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 15:50:16.289430 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Nov 5 15:50:16.289440 kernel: Memory: 1989436K/2096612K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102612K reserved, 0K cma-reserved)
Nov 5 15:50:16.289450 kernel: devtmpfs: initialized
Nov 5 15:50:16.289461 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:50:16.289486 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:50:16.289495 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 15:50:16.289504 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:50:16.289514 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:50:16.289523 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:50:16.289532 kernel: audit: type=2000 audit(1762357813.522:1): state=initialized audit_enabled=0 res=1
Nov 5 15:50:16.289544 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:50:16.289553 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:50:16.289562 kernel: cpuidle: using governor menu
Nov 5 15:50:16.289571 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:50:16.289581 kernel: dca service started, version 1.12.1
Nov 5 15:50:16.289590 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:50:16.289600 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:50:16.289611 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:50:16.289625 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:50:16.290513 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:50:16.290537 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:50:16.290548 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:50:16.290559 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:50:16.290569 kernel: ACPI: Interpreter enabled
Nov 5 15:50:16.290584 kernel: ACPI: PM: (supports S0 S5)
Nov 5 15:50:16.290595 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 15:50:16.290605 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 15:50:16.290616 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 15:50:16.290627 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 5 15:50:16.290637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 15:50:16.290950 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:50:16.291135 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 5 15:50:16.291302 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 5 15:50:16.291317 kernel: acpiphp: Slot [3] registered
Nov 5 15:50:16.291327 kernel: acpiphp: Slot [4] registered
Nov 5 15:50:16.291336 kernel: acpiphp: Slot [5] registered
Nov 5 15:50:16.291345 kernel: acpiphp: Slot [6] registered
Nov 5 15:50:16.291358 kernel: acpiphp: Slot [7] registered
Nov 5 15:50:16.291367 kernel: acpiphp: Slot [8] registered
Nov 5 15:50:16.291376 kernel: acpiphp: Slot [9] registered
Nov 5 15:50:16.291385 kernel: acpiphp: Slot [10] registered
Nov 5 15:50:16.291395 kernel: acpiphp: Slot [11] registered
Nov 5 15:50:16.291405 kernel: acpiphp: Slot [12] registered
Nov 5 15:50:16.291414 kernel: acpiphp: Slot [13] registered
Nov 5 15:50:16.291423 kernel: acpiphp: Slot [14] registered
Nov 5 15:50:16.291434 kernel: acpiphp: Slot [15] registered
Nov 5 15:50:16.291443 kernel: acpiphp: Slot [16] registered
Nov 5 15:50:16.291453 kernel: acpiphp: Slot [17] registered
Nov 5 15:50:16.291463 kernel: acpiphp: Slot [18] registered
Nov 5 15:50:16.292701 kernel: acpiphp: Slot [19] registered
Nov 5 15:50:16.292717 kernel: acpiphp: Slot [20] registered
Nov 5 15:50:16.292727 kernel: acpiphp: Slot [21] registered
Nov 5 15:50:16.292741 kernel: acpiphp: Slot [22] registered
Nov 5 15:50:16.292750 kernel: acpiphp: Slot [23] registered
Nov 5 15:50:16.292759 kernel: acpiphp: Slot [24] registered
Nov 5 15:50:16.292768 kernel: acpiphp: Slot [25] registered
Nov 5 15:50:16.292778 kernel: acpiphp: Slot [26] registered
Nov 5 15:50:16.292787 kernel: acpiphp: Slot [27] registered
Nov 5 15:50:16.292796 kernel: acpiphp: Slot [28] registered
Nov 5 15:50:16.292807 kernel: acpiphp: Slot [29] registered
Nov 5 15:50:16.292816 kernel: acpiphp: Slot [30] registered
Nov 5 15:50:16.292825 kernel: acpiphp: Slot [31] registered
Nov 5 15:50:16.292835 kernel: PCI host bridge to bus 0000:00
Nov 5 15:50:16.296550 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 15:50:16.296823 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 15:50:16.296955 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 15:50:16.297090 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 5 15:50:16.297211 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Nov 5 15:50:16.297360 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 15:50:16.297571 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:50:16.297771 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 5 15:50:16.297933 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 5 15:50:16.298071 kernel: pci 0000:00:01.1: BAR 4 [io 0xc1e0-0xc1ef]
Nov 5 15:50:16.298287 kernel: pci 0000:00:01.1: BAR 0 [io 0x01f0-0x01f7]: legacy IDE quirk
Nov 5 15:50:16.304541 kernel: pci 0000:00:01.1: BAR 1 [io 0x03f6]: legacy IDE quirk
Nov 5 15:50:16.304771 kernel: pci 0000:00:01.1: BAR 2 [io 0x0170-0x0177]: legacy IDE quirk
Nov 5 15:50:16.304908 kernel: pci 0000:00:01.1: BAR 3 [io 0x0376]: legacy IDE quirk
Nov 5 15:50:16.305065 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 5 15:50:16.305201 kernel: pci 0000:00:01.2: BAR 4 [io 0xc180-0xc19f]
Nov 5 15:50:16.305371 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 5 15:50:16.305544 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Nov 5 15:50:16.305746 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Nov 5 15:50:16.305919 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 5 15:50:16.306107 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 5 15:50:16.306281 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 5 15:50:16.306941 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfebf0000-0xfebf0fff]
Nov 5 15:50:16.307090 kernel: pci 0000:00:02.0: ROM [mem 0xfebe0000-0xfebeffff pref]
Nov 5 15:50:16.307226 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 15:50:16.307522 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 15:50:16.307662 kernel: pci 0000:00:03.0: BAR 0 [io 0xc1a0-0xc1bf]
Nov 5 15:50:16.307804 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebf1000-0xfebf1fff]
Nov 5 15:50:16.307937 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 5 15:50:16.308082 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 15:50:16.308221 kernel: pci 0000:00:04.0: BAR 0 [io 0xc1c0-0xc1df]
Nov 5 15:50:16.308352 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebf2000-0xfebf2fff]
Nov 5 15:50:16.308558 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 5 15:50:16.308711 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:50:16.308844 kernel: pci 0000:00:05.0: BAR 0 [io 0xc100-0xc13f]
Nov 5 15:50:16.308976 kernel: pci 0000:00:05.0: BAR 1 [mem 0xfebf3000-0xfebf3fff]
Nov 5 15:50:16.309115 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 5 15:50:16.309263 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:50:16.309398 kernel: pci 0000:00:06.0: BAR 0 [io 0xc000-0xc07f]
Nov 5 15:50:16.309561 kernel: pci 0000:00:06.0: BAR 1 [mem 0xfebf4000-0xfebf4fff]
Nov 5 15:50:16.309774 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 5 15:50:16.309946 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:50:16.310088 kernel: pci 0000:00:07.0: BAR 0 [io 0xc080-0xc0ff]
Nov 5 15:50:16.310218 kernel: pci 0000:00:07.0: BAR 1 [mem 0xfebf5000-0xfebf5fff]
Nov 5 15:50:16.310348 kernel: pci 0000:00:07.0: BAR 4 [mem 0xfe814000-0xfe817fff 64bit pref]
Nov 5 15:50:16.311137 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 15:50:16.311342 kernel: pci 0000:00:08.0: BAR 0 [io 0xc140-0xc17f]
Nov 5 15:50:16.311800 kernel: pci 0000:00:08.0: BAR 4 [mem 0xfe818000-0xfe81bfff 64bit pref]
Nov 5 15:50:16.311818 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 15:50:16.311828 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 15:50:16.311838 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 15:50:16.311848 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 15:50:16.311857 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 5 15:50:16.311871 kernel: iommu: Default domain type: Translated
Nov 5 15:50:16.311880 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 15:50:16.311890 kernel: PCI: Using ACPI for IRQ routing
Nov 5 15:50:16.311899 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 15:50:16.311909 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 5 15:50:16.311919 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Nov 5 15:50:16.312138 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 5 15:50:16.312306 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 5 15:50:16.312445 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 15:50:16.312456 kernel: vgaarb: loaded
Nov 5 15:50:16.312466 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 15:50:16.312989 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 15:50:16.313000 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 15:50:16.313011 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 15:50:16.313020 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 15:50:16.313036 kernel: pnp: PnP ACPI init
Nov 5 15:50:16.313046 kernel: pnp: PnP ACPI: found 4 devices
Nov 5 15:50:16.313056 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 15:50:16.313066 kernel: NET: Registered PF_INET protocol family
Nov 5 15:50:16.313075 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 15:50:16.313084 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 5 15:50:16.313093 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 15:50:16.313106 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 5 15:50:16.313115 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 5 15:50:16.313125 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 5 15:50:16.313134 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 5 15:50:16.313143 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 5 15:50:16.313153 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 15:50:16.313162 kernel: NET: Registered PF_XDP protocol family
Nov 5 15:50:16.313342 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 15:50:16.313743 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 15:50:16.314016 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 15:50:16.314145 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 5 15:50:16.314263 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Nov 5 15:50:16.314403 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 5 15:50:16.314568 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 5 15:50:16.314582 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 5 15:50:16.314715 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x720 took 27730 usecs
Nov 5 15:50:16.314728 kernel: PCI: CLS 0 bytes, default 64
Nov 5 15:50:16.314742 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 5 15:50:16.314758 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Nov 5 15:50:16.314772 kernel: Initialise system trusted keyrings
Nov 5 15:50:16.314790 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 5 15:50:16.314803 kernel: Key type asymmetric registered
Nov 5 15:50:16.314817 kernel: Asymmetric key parser 'x509' registered
Nov 5 15:50:16.314830 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 15:50:16.314843 kernel: io scheduler mq-deadline registered
Nov 5 15:50:16.314856 kernel: io scheduler kyber registered
Nov 5 15:50:16.314869 kernel: io scheduler bfq registered
Nov 5 15:50:16.314886 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 15:50:16.314899 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 5 15:50:16.314911 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 5 15:50:16.314924 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 5 15:50:16.314936 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 15:50:16.314950 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 15:50:16.314963 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 15:50:16.314980 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 15:50:16.314993 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 15:50:16.315007 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 15:50:16.315240 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 5 15:50:16.315413 kernel: rtc_cmos 00:03: registered as rtc0
Nov 5 15:50:16.315570 kernel: rtc_cmos 00:03: setting system clock to 2025-11-05T15:50:14 UTC (1762357814)
Nov 5 15:50:16.315744 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 5 15:50:16.315758 kernel: intel_pstate: CPU model not supported
Nov 5 15:50:16.315768 kernel: NET: Registered PF_INET6 protocol family
Nov 5 15:50:16.315777 kernel: Segment Routing with IPv6
Nov 5 15:50:16.315786 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 15:50:16.315796 kernel: NET: Registered PF_PACKET protocol family
Nov 5 15:50:16.315805 kernel: Key type dns_resolver registered
Nov 5 15:50:16.315820 kernel: IPI shorthand broadcast: enabled
Nov 5 15:50:16.315836 kernel: sched_clock: Marking stable (1615003995, 277510270)->(1956251555, -63737290)
Nov 5 15:50:16.315850 kernel: registered taskstats version 1
Nov 5 15:50:16.315863 kernel: Loading compiled-in X.509 certificates
Nov 5 15:50:16.315872 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 15:50:16.315881 kernel: Demotion targets for Node 0: null
Nov 5 15:50:16.315890 kernel: Key type .fscrypt registered
Nov 5 15:50:16.315902 kernel: Key type fscrypt-provisioning registered
Nov 5 15:50:16.315927 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 15:50:16.315938 kernel: ima: Allocated hash algorithm: sha1
Nov 5 15:50:16.315948 kernel: ima: No architecture policies found
Nov 5 15:50:16.315958 kernel: clk: Disabling unused clocks
Nov 5 15:50:16.315968 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 15:50:16.315977 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 15:50:16.315989 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 15:50:16.315999 kernel: Run /init as init process
Nov 5 15:50:16.316008 kernel: with arguments:
Nov 5 15:50:16.316018 kernel: /init
Nov 5 15:50:16.316028 kernel: with environment:
Nov 5 15:50:16.316037 kernel: HOME=/
Nov 5 15:50:16.316047 kernel: TERM=linux
Nov 5 15:50:16.316056 kernel: SCSI subsystem initialized
Nov 5 15:50:16.316068 kernel: libata version 3.00 loaded.
Nov 5 15:50:16.316373 kernel: ata_piix 0000:00:01.1: version 2.13
Nov 5 15:50:16.318796 kernel: scsi host0: ata_piix
Nov 5 15:50:16.318976 kernel: scsi host1: ata_piix
Nov 5 15:50:16.318990 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 lpm-pol 0
Nov 5 15:50:16.319008 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 lpm-pol 0
Nov 5 15:50:16.319018 kernel: ACPI: bus type USB registered
Nov 5 15:50:16.319028 kernel: usbcore: registered new interface driver usbfs
Nov 5 15:50:16.319038 kernel: usbcore: registered new interface driver hub
Nov 5 15:50:16.319048 kernel: usbcore: registered new device driver usb
Nov 5 15:50:16.319192 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 5 15:50:16.319359 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 5 15:50:16.319537 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 5 15:50:16.319673 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Nov 5 15:50:16.319850 kernel: hub 1-0:1.0: USB hub found
Nov 5 15:50:16.320027 kernel: hub 1-0:1.0: 2 ports detected
Nov 5 15:50:16.320187 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Nov 5 15:50:16.320320 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 5 15:50:16.320334 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 15:50:16.320344 kernel: GPT:16515071 != 125829119
Nov 5 15:50:16.320354 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 15:50:16.320367 kernel: GPT:16515071 != 125829119
Nov 5 15:50:16.320377 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 15:50:16.320386 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 15:50:16.322622 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Nov 5 15:50:16.322830 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Nov 5 15:50:16.323003 kernel: virtio_scsi virtio3: 2/0/0 default/read/poll queues
Nov 5 15:50:16.323158 kernel: scsi host2: Virtio SCSI HBA
Nov 5 15:50:16.323174 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 15:50:16.323184 kernel: device-mapper: uevent: version 1.0.3
Nov 5 15:50:16.323195 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 15:50:16.323210 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 15:50:16.323225 kernel: raid6: avx2x4 gen() 18892 MB/s
Nov 5 15:50:16.323309 kernel: raid6: avx2x2 gen() 25291 MB/s
Nov 5 15:50:16.323324 kernel: raid6: avx2x1 gen() 15792 MB/s
Nov 5 15:50:16.323334 kernel: raid6: using algorithm avx2x2 gen() 25291 MB/s
Nov 5 15:50:16.323344 kernel: raid6: .... xor() 17529 MB/s, rmw enabled
Nov 5 15:50:16.323353 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 15:50:16.323364 kernel: xor: automatically using best checksumming function avx
Nov 5 15:50:16.323374 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 15:50:16.323384 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (162)
Nov 5 15:50:16.323397 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01
Nov 5 15:50:16.323408 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:50:16.323418 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 15:50:16.323428 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 15:50:16.323437 kernel: loop: module loaded
Nov 5 15:50:16.323447 kernel: loop0: detected capacity change from 0 to 100120
Nov 5 15:50:16.323457 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 15:50:16.323491 systemd[1]: Successfully made /usr/ read-only.
Nov 5 15:50:16.323506 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:50:16.323517 systemd[1]: Detected virtualization kvm.
Nov 5 15:50:16.323527 systemd[1]: Detected architecture x86-64.
Nov 5 15:50:16.323537 systemd[1]: Running in initrd.
Nov 5 15:50:16.323547 systemd[1]: No hostname configured, using default hostname.
Nov 5 15:50:16.323561 systemd[1]: Hostname set to .
Nov 5 15:50:16.323571 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 15:50:16.323581 systemd[1]: Queued start job for default target initrd.target.
Nov 5 15:50:16.323592 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:50:16.323602 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:50:16.323612 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:50:16.323626 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 15:50:16.323637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:50:16.323648 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 15:50:16.323659 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 15:50:16.323670 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:50:16.323680 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:50:16.323693 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:50:16.323703 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:50:16.323713 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:50:16.323723 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:50:16.323733 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:50:16.323744 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:50:16.323754 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:50:16.323766 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 15:50:16.323776 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 15:50:16.323786 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:50:16.323796 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:50:16.323806 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:50:16.323816 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:50:16.323827 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 15:50:16.323839 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 15:50:16.323849 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:50:16.323859 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 15:50:16.323870 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 15:50:16.323881 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 15:50:16.323893 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:50:16.323906 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:50:16.323916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:50:16.323962 systemd-journald[298]: Collecting audit messages is disabled.
Nov 5 15:50:16.323991 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 15:50:16.324001 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:50:16.324012 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 15:50:16.324022 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 15:50:16.324035 systemd-journald[298]: Journal started
Nov 5 15:50:16.324058 systemd-journald[298]: Runtime Journal (/run/log/journal/c4ace5ee28bc4c699a496ba03dd09e66) is 4.9M, max 39.2M, 34.3M free.
Nov 5 15:50:16.328501 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:50:16.334296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:50:16.346434 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:50:16.358167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:50:16.457555 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 15:50:16.457596 kernel: Bridge firewalling registered
Nov 5 15:50:16.370681 systemd-modules-load[300]: Inserted module 'br_netfilter'
Nov 5 15:50:16.371746 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:50:16.379983 systemd-tmpfiles[314]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 15:50:16.461318 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:50:16.464004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:16.470832 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 15:50:16.474704 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:50:16.481793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:50:16.504167 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:50:16.506327 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:50:16.509448 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 15:50:16.514678 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:50:16.537952 dracut-cmdline[336]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:50:16.586906 systemd-resolved[337]: Positive Trust Anchors:
Nov 5 15:50:16.586933 systemd-resolved[337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:50:16.586939 systemd-resolved[337]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:50:16.586990 systemd-resolved[337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:50:16.628109 systemd-resolved[337]: Defaulting to hostname 'linux'.
Nov 5 15:50:16.629947 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:50:16.630773 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:50:16.695519 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 15:50:16.715522 kernel: iscsi: registered transport (tcp)
Nov 5 15:50:16.747284 kernel: iscsi: registered transport (qla4xxx)
Nov 5 15:50:16.747424 kernel: QLogic iSCSI HBA Driver
Nov 5 15:50:16.781754 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:50:16.814263 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:50:16.818222 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:50:16.879652 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:50:16.883104 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 15:50:16.885667 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 15:50:16.938496 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:50:16.941758 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:50:16.984784 systemd-udevd[583]: Using default interface naming scheme 'v257'.
Nov 5 15:50:17.003416 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:50:17.007889 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 15:50:17.042230 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:50:17.046364 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:50:17.055818 dracut-pre-trigger[661]: rd.md=0: removing MD RAID activation
Nov 5 15:50:17.095709 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:50:17.099055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:50:17.121718 systemd-networkd[682]: lo: Link UP
Nov 5 15:50:17.121729 systemd-networkd[682]: lo: Gained carrier
Nov 5 15:50:17.123238 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:50:17.125275 systemd[1]: Reached target network.target - Network.
Nov 5 15:50:17.199431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:50:17.204812 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 15:50:17.332243 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 15:50:17.347136 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 15:50:17.363017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 15:50:17.374824 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 15:50:17.380006 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 15:50:17.390560 kernel: cryptd: max_cpu_qlen set to 1000
Nov 5 15:50:17.411860 disk-uuid[744]: Primary Header is updated.
Nov 5 15:50:17.411860 disk-uuid[744]: Secondary Entries is updated.
Nov 5 15:50:17.411860 disk-uuid[744]: Secondary Header is updated.
Nov 5 15:50:17.438536 kernel: AES CTR mode by8 optimization enabled
Nov 5 15:50:17.554439 systemd-networkd[682]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Nov 5 15:50:17.556540 systemd-networkd[682]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Nov 5 15:50:17.564730 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 5 15:50:17.559487 systemd-networkd[682]: eth0: Link UP
Nov 5 15:50:17.559697 systemd-networkd[682]: eth0: Gained carrier
Nov 5 15:50:17.559719 systemd-networkd[682]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/yy-digitalocean.network
Nov 5 15:50:17.567212 systemd-networkd[682]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:17.567218 systemd-networkd[682]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:50:17.574895 systemd-networkd[682]: eth1: Link UP
Nov 5 15:50:17.579600 systemd-networkd[682]: eth0: DHCPv4 address 143.110.239.237/20, gateway 143.110.224.1 acquired from 169.254.169.253
Nov 5 15:50:17.580029 systemd-networkd[682]: eth1: Gained carrier
Nov 5 15:50:17.580053 systemd-networkd[682]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:17.591156 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:50:17.591359 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:17.593342 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:50:17.596850 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:50:17.598567 systemd-networkd[682]: eth1: DHCPv4 address 10.124.0.31/20 acquired from 169.254.169.253
Nov 5 15:50:17.620060 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:50:17.621144 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:50:17.626057 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:50:17.628123 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:50:17.633552 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 15:50:17.745148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:17.761149 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:50:18.505956 disk-uuid[746]: Warning: The kernel is still using the old partition table.
Nov 5 15:50:18.505956 disk-uuid[746]: The new table will be used at the next reboot or after you
Nov 5 15:50:18.505956 disk-uuid[746]: run partprobe(8) or kpartx(8)
Nov 5 15:50:18.505956 disk-uuid[746]: The operation has completed successfully.
Nov 5 15:50:18.513199 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 15:50:18.513389 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 15:50:18.517196 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 15:50:18.553853 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837)
Nov 5 15:50:18.553922 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:18.556955 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:50:18.565293 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 15:50:18.565391 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 15:50:18.574517 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:18.576232 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 15:50:18.578843 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 15:50:18.821448 ignition[856]: Ignition 2.22.0
Nov 5 15:50:18.821490 ignition[856]: Stage: fetch-offline
Nov 5 15:50:18.821555 ignition[856]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:18.823956 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:50:18.821570 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 5 15:50:18.821707 ignition[856]: parsed url from cmdline: ""
Nov 5 15:50:18.821711 ignition[856]: no config URL provided
Nov 5 15:50:18.821718 ignition[856]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 15:50:18.828733 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 5 15:50:18.821730 ignition[856]: no config at "/usr/lib/ignition/user.ign"
Nov 5 15:50:18.821738 ignition[856]: failed to fetch config: resource requires networking
Nov 5 15:50:18.822215 ignition[856]: Ignition finished successfully
Nov 5 15:50:18.865244 ignition[864]: Ignition 2.22.0
Nov 5 15:50:18.865265 ignition[864]: Stage: fetch
Nov 5 15:50:18.865512 ignition[864]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:18.865527 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 5 15:50:18.865678 ignition[864]: parsed url from cmdline: ""
Nov 5 15:50:18.865688 ignition[864]: no config URL provided
Nov 5 15:50:18.865697 ignition[864]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 15:50:18.865709 ignition[864]: no config at "/usr/lib/ignition/user.ign"
Nov 5 15:50:18.865766 ignition[864]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Nov 5 15:50:18.883533 ignition[864]: GET result: OK
Nov 5 15:50:18.883744 ignition[864]: parsing config with SHA512: 9a7e86f1f2aa9c8cc6732b6e764a8b7456f28d9898af07679e1a3d99d63a0175329f042dfe5ea3eb503a8bf8964b4e2d5e02b8dfcddc3b6ade8da727a77f14be
Nov 5 15:50:18.890735 unknown[864]: fetched base config from "system"
Nov 5 15:50:18.890749 unknown[864]: fetched base config from "system"
Nov 5 15:50:18.891116 ignition[864]: fetch: fetch complete
Nov 5 15:50:18.890756 unknown[864]: fetched user config from "digitalocean"
Nov 5 15:50:18.891123 ignition[864]: fetch: fetch passed
Nov 5 15:50:18.894031 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 5 15:50:18.891185 ignition[864]: Ignition finished successfully
Nov 5 15:50:18.897658 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 15:50:18.930707 systemd-networkd[682]: eth0: Gained IPv6LL
Nov 5 15:50:18.966837 ignition[871]: Ignition 2.22.0
Nov 5 15:50:18.966858 ignition[871]: Stage: kargs
Nov 5 15:50:18.967091 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:18.967101 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 5 15:50:18.968633 ignition[871]: kargs: kargs passed
Nov 5 15:50:18.972016 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 15:50:18.968702 ignition[871]: Ignition finished successfully
Nov 5 15:50:18.976158 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 15:50:18.995695 systemd-networkd[682]: eth1: Gained IPv6LL
Nov 5 15:50:19.017666 ignition[878]: Ignition 2.22.0
Nov 5 15:50:19.017682 ignition[878]: Stage: disks
Nov 5 15:50:19.017857 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:19.017868 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 5 15:50:19.020862 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 15:50:19.019046 ignition[878]: disks: disks passed
Nov 5 15:50:19.019137 ignition[878]: Ignition finished successfully
Nov 5 15:50:19.030208 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 15:50:19.031835 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 15:50:19.033575 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:50:19.035357 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:50:19.037049 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:50:19.040669 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 15:50:19.083643 systemd-fsck[886]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 5 15:50:19.087975 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 15:50:19.091661 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 15:50:19.248557 kernel: EXT4-fs (vda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none.
Nov 5 15:50:19.248668 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 15:50:19.250255 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:50:19.253557 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 15:50:19.256566 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 15:50:19.261652 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Nov 5 15:50:19.268640 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 5 15:50:19.273440 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 15:50:19.274892 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:50:19.284524 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (894)
Nov 5 15:50:19.287357 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 15:50:19.295895 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:19.295970 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:50:19.302278 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 15:50:19.302401 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 15:50:19.306246 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 15:50:19.315182 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 15:50:19.425528 coreos-metadata[896]: Nov 05 15:50:19.425 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 5 15:50:19.427679 initrd-setup-root[924]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 15:50:19.437256 coreos-metadata[897]: Nov 05 15:50:19.437 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Nov 5 15:50:19.441629 coreos-metadata[896]: Nov 05 15:50:19.440 INFO Fetch successful
Nov 5 15:50:19.444548 initrd-setup-root[931]: cut: /sysroot/etc/group: No such file or directory
Nov 5 15:50:19.451364 coreos-metadata[897]: Nov 05 15:50:19.449 INFO Fetch successful
Nov 5 15:50:19.452966 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Nov 5 15:50:19.456616 initrd-setup-root[938]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 15:50:19.454287 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Nov 5 15:50:19.461442 coreos-metadata[897]: Nov 05 15:50:19.460 INFO wrote hostname ci-4487.0.1-6-a291033793 to /sysroot/etc/hostname
Nov 5 15:50:19.463281 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 5 15:50:19.465974 initrd-setup-root[946]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 15:50:19.621759 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 15:50:19.626755 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 15:50:19.630719 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 15:50:19.659579 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 15:50:19.664272 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:19.681724 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 15:50:19.708089 ignition[1015]: INFO : Ignition 2.22.0
Nov 5 15:50:19.708089 ignition[1015]: INFO : Stage: mount
Nov 5 15:50:19.711902 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:19.711902 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 5 15:50:19.711902 ignition[1015]: INFO : mount: mount passed
Nov 5 15:50:19.711902 ignition[1015]: INFO : Ignition finished successfully
Nov 5 15:50:19.712334 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 15:50:19.716634 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 15:50:19.742004 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 15:50:19.768543 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1026)
Nov 5 15:50:19.772515 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:19.772649 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:50:19.782890 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 15:50:19.783018 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 15:50:19.786272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 15:50:19.841518 ignition[1043]: INFO : Ignition 2.22.0
Nov 5 15:50:19.841518 ignition[1043]: INFO : Stage: files
Nov 5 15:50:19.843672 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:19.843672 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 5 15:50:19.843672 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 15:50:19.846386 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 15:50:19.846386 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 15:50:19.849954 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 15:50:19.851088 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 15:50:19.852103 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 15:50:19.851201 unknown[1043]: wrote ssh authorized keys file for user: core
Nov 5 15:50:19.853974 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:50:19.855182 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 5 15:50:19.901057 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 15:50:19.968984 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:50:19.968984 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 15:50:19.973516 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 15:50:19.973516 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:50:19.973516 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:50:19.973516 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:50:19.973516 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:50:19.973516 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:50:19.973516 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:50:19.983392 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:50:19.983392 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:50:19.983392 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:50:19.983392 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:50:19.983392 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:50:19.983392 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 5 15:50:21.153168 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 15:50:21.529898 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:50:21.529898 ignition[1043]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 15:50:21.532724 ignition[1043]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:50:21.533973 ignition[1043]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:50:21.533973 ignition[1043]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 15:50:21.533973 ignition[1043]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 15:50:21.533973 ignition[1043]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 15:50:21.533973 ignition[1043]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:50:21.540239 ignition[1043]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:50:21.540239 ignition[1043]: INFO : files: files passed
Nov 5 15:50:21.540239 ignition[1043]: INFO : Ignition finished successfully
Nov 5 15:50:21.536180 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 15:50:21.540849 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 15:50:21.545977 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 15:50:21.569454 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 15:50:21.570598 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 15:50:21.582503 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:50:21.582503 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:50:21.585058 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:50:21.586682 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:50:21.588453 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 15:50:21.590870 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 15:50:21.660534 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 15:50:21.660681 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 15:50:21.663116 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 15:50:21.664439 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 15:50:21.666573 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 15:50:21.668707 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 15:50:21.700312 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:50:21.703561 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 15:50:21.730850 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:50:21.731097 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:50:21.732173 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:50:21.733863 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 15:50:21.735509 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 15:50:21.735682 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:50:21.737639 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 15:50:21.738594 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 15:50:21.740067 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 15:50:21.741705 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:50:21.743490 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 15:50:21.745833 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:50:21.747521 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 15:50:21.749193 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:50:21.750841 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 15:50:21.752634 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 15:50:21.754627 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 15:50:21.756224 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 15:50:21.756378 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:50:21.758137 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:50:21.759458 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:50:21.760825 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 15:50:21.761206 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:50:21.762519 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 15:50:21.762676 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:50:21.764730 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 15:50:21.764908 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:50:21.766560 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 15:50:21.766668 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 15:50:21.768335 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 5 15:50:21.768592 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 5 15:50:21.771715 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 15:50:21.777902 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 15:50:21.781293 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 15:50:21.781596 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:50:21.783196 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 15:50:21.783387 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:50:21.787728 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 15:50:21.787880 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:50:21.797695 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 15:50:21.798869 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 15:50:21.815149 ignition[1099]: INFO : Ignition 2.22.0
Nov 5 15:50:21.817578 ignition[1099]: INFO : Stage: umount
Nov 5 15:50:21.817578 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:21.817578 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Nov 5 15:50:21.817578 ignition[1099]: INFO : umount: umount passed
Nov 5 15:50:21.817578 ignition[1099]: INFO : Ignition finished successfully
Nov 5 15:50:21.831762 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 15:50:21.832624 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 15:50:21.832786 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 15:50:21.833980 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 15:50:21.834148 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 15:50:21.836137 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 15:50:21.836273 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 15:50:21.838019 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 15:50:21.838101 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 15:50:21.839873 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 5 15:50:21.839943 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 5 15:50:21.841290 systemd[1]: Stopped target network.target - Network.
Nov 5 15:50:21.842684 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 15:50:21.842761 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:50:21.844312 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 15:50:21.845627 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 15:50:21.849652 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:50:21.851513 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 15:50:21.853122 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 15:50:21.854858 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 15:50:21.854922 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:50:21.856486 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 15:50:21.856546 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:50:21.857988 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 15:50:21.858076 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 15:50:21.860127 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 15:50:21.860216 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 15:50:21.861672 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 15:50:21.861770 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 15:50:21.863407 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 15:50:21.864707 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 15:50:21.875943 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 15:50:21.876076 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 15:50:21.878838 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 15:50:21.878965 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 15:50:21.884577 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 15:50:21.885613 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 15:50:21.885695 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:50:21.888355 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 15:50:21.891629 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 15:50:21.891767 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:50:21.892635 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 15:50:21.892694 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:50:21.893465 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 15:50:21.896639 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:50:21.900900 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:50:21.914084 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 15:50:21.914299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:50:21.917970 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 15:50:21.918078 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:50:21.922462 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 15:50:21.922600 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:50:21.923438 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 15:50:21.924579 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:50:21.926750 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 15:50:21.926851 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:50:21.928808 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 15:50:21.928882 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:50:21.931416 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 15:50:21.932170 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 15:50:21.932247 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:50:21.936082 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 15:50:21.936170 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:50:21.936944 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 5 15:50:21.936995 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:50:21.938678 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 15:50:21.938737 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:50:21.939722 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:50:21.939781 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:21.951493 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 15:50:21.951672 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 15:50:21.972855 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 15:50:21.973086 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 15:50:21.975390 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 15:50:21.978748 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 15:50:22.021918 systemd[1]: Switching root.
Nov 5 15:50:22.071030 systemd-journald[298]: Journal stopped
Nov 5 15:50:23.727971 systemd-journald[298]: Received SIGTERM from PID 1 (systemd).
Nov 5 15:50:23.728100 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 15:50:23.728134 kernel: SELinux: policy capability open_perms=1
Nov 5 15:50:23.728160 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 15:50:23.728180 kernel: SELinux: policy capability always_check_network=0
Nov 5 15:50:23.728199 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 15:50:23.728217 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 15:50:23.728235 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 15:50:23.728260 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 15:50:23.728281 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 15:50:23.728303 kernel: audit: type=1403 audit(1762357822.370:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 15:50:23.728323 systemd[1]: Successfully loaded SELinux policy in 89.688ms.
Nov 5 15:50:23.728368 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.324ms.
Nov 5 15:50:23.728391 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:50:23.728412 systemd[1]: Detected virtualization kvm.
Nov 5 15:50:23.728431 systemd[1]: Detected architecture x86-64.
Nov 5 15:50:23.728455 systemd[1]: Detected first boot.
Nov 5 15:50:23.732563 systemd[1]: Hostname set to .
Nov 5 15:50:23.732614 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 15:50:23.732638 zram_generator::config[1144]: No configuration found.
Nov 5 15:50:23.732666 kernel: Guest personality initialized and is inactive
Nov 5 15:50:23.733142 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 15:50:23.733175 kernel: Initialized host personality
Nov 5 15:50:23.733208 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 15:50:23.733230 systemd[1]: Populated /etc with preset unit settings.
Nov 5 15:50:23.735550 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 15:50:23.735595 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 15:50:23.735620 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:50:23.735648 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 15:50:23.735681 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 15:50:23.735711 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 15:50:23.735737 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 15:50:23.735757 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 15:50:23.735776 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 15:50:23.735796 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 15:50:23.735815 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 15:50:23.735839 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:50:23.735866 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:50:23.735888 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 15:50:23.735909 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 15:50:23.735937 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 15:50:23.735964 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:50:23.735986 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 15:50:23.736007 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:50:23.736026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:50:23.736045 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 15:50:23.736063 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 15:50:23.736083 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:50:23.736107 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 15:50:23.736128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:50:23.736151 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:50:23.736177 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:50:23.736202 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:50:23.736226 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 15:50:23.736253 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 15:50:23.736279 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 15:50:23.736307 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:50:23.736334 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:50:23.736360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:50:23.736387 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 15:50:23.736407 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 15:50:23.736425 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 15:50:23.736445 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 15:50:23.736501 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:50:23.736525 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 15:50:23.736550 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 15:50:23.736575 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 15:50:23.736598 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 15:50:23.736620 systemd[1]: Reached target machines.target - Containers.
Nov 5 15:50:23.736640 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 15:50:23.736667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:50:23.736690 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:50:23.736710 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 15:50:23.736733 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:50:23.736756 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:50:23.736776 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:50:23.736799 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 15:50:23.736818 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:50:23.736838 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 15:50:23.736861 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 15:50:23.736886 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 15:50:23.736909 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 15:50:23.736929 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 15:50:23.736962 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:50:23.736984 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:50:23.737005 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:50:23.737027 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:50:23.737045 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 15:50:23.737063 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 15:50:23.737083 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:50:23.737110 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:50:23.737133 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 15:50:23.737155 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 15:50:23.737178 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 15:50:23.737205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 15:50:23.737231 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 15:50:23.737251 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 15:50:23.737270 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:50:23.737294 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 15:50:23.737315 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 15:50:23.741547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:50:23.741641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:50:23.741665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:50:23.741689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:50:23.741712 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:50:23.741744 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:50:23.741767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:50:23.741792 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 15:50:23.741815 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 15:50:23.741837 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 15:50:23.741861 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 15:50:23.741884 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:50:23.741911 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 15:50:23.741935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:50:23.741959 kernel: fuse: init (API version 7.41)
Nov 5 15:50:23.741984 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 15:50:23.742007 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:50:23.742031 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 15:50:23.742058 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:50:23.742085 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:50:23.742110 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 15:50:23.742134 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 15:50:23.742158 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 15:50:23.742184 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 15:50:23.742209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:50:23.742294 systemd-journald[1216]: Collecting audit messages is disabled.
Nov 5 15:50:23.742342 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 15:50:23.742364 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 15:50:23.742384 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 15:50:23.742404 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 15:50:23.742427 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:50:23.742450 systemd-journald[1216]: Journal started
Nov 5 15:50:23.742515 systemd-journald[1216]: Runtime Journal (/run/log/journal/c4ace5ee28bc4c699a496ba03dd09e66) is 4.9M, max 39.2M, 34.3M free.
Nov 5 15:50:23.161995 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 15:50:23.187092 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 5 15:50:23.188104 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 15:50:23.755564 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 15:50:23.762517 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:50:23.768525 kernel: loop1: detected capacity change from 0 to 8
Nov 5 15:50:23.778296 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 15:50:23.789271 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 15:50:23.816838 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Nov 5 15:50:23.816853 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Nov 5 15:50:23.827408 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:50:23.829633 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:50:23.834725 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 15:50:23.844519 kernel: loop2: detected capacity change from 0 to 110984
Nov 5 15:50:23.846318 systemd-journald[1216]: Time spent on flushing to /var/log/journal/c4ace5ee28bc4c699a496ba03dd09e66 is 56.993ms for 1000 entries.
Nov 5 15:50:23.846318 systemd-journald[1216]: System Journal (/var/log/journal/c4ace5ee28bc4c699a496ba03dd09e66) is 8M, max 163.5M, 155.5M free.
Nov 5 15:50:23.933694 kernel: ACPI: bus type drm_connector registered
Nov 5 15:50:23.933774 systemd-journald[1216]: Received client request to flush runtime journal.
Nov 5 15:50:23.933849 kernel: loop3: detected capacity change from 0 to 128048
Nov 5 15:50:23.852858 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:50:23.853119 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:50:23.870700 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 15:50:23.928633 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:50:23.933461 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 15:50:23.936505 kernel: loop4: detected capacity change from 0 to 229808
Nov 5 15:50:23.937176 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 15:50:23.944739 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:50:23.948846 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:50:23.970704 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 15:50:23.984511 kernel: loop5: detected capacity change from 0 to 8
Nov 5 15:50:23.996513 kernel: loop6: detected capacity change from 0 to 110984
Nov 5 15:50:24.002171 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Nov 5 15:50:24.003611 systemd-tmpfiles[1288]: ACLs are not supported, ignoring.
Nov 5 15:50:24.022646 kernel: loop7: detected capacity change from 0 to 128048
Nov 5 15:50:24.021933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:50:24.042506 kernel: loop1: detected capacity change from 0 to 229808
Nov 5 15:50:24.060879 (sd-merge)[1292]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-digitalocean.raw'.
Nov 5 15:50:24.065998 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 15:50:24.071353 (sd-merge)[1292]: Merged extensions into '/usr'.
Nov 5 15:50:24.080703 systemd[1]: Reload requested from client PID 1247 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 15:50:24.080730 systemd[1]: Reloading...
Nov 5 15:50:24.218587 zram_generator::config[1332]: No configuration found.
Nov 5 15:50:24.246205 systemd-resolved[1287]: Positive Trust Anchors:
Nov 5 15:50:24.246227 systemd-resolved[1287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:50:24.246232 systemd-resolved[1287]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:50:24.246268 systemd-resolved[1287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:50:24.275374 systemd-resolved[1287]: Using system hostname 'ci-4487.0.1-6-a291033793'.
Nov 5 15:50:24.437304 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 15:50:24.438162 systemd[1]: Reloading finished in 356 ms.
Nov 5 15:50:24.452260 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:50:24.453909 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 15:50:24.459074 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:50:24.461925 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 15:50:24.473679 systemd[1]: Starting ensure-sysext.service...
Nov 5 15:50:24.477398 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:50:24.494976 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 15:50:24.527206 systemd[1]: Reload requested from client PID 1369 ('systemctl') (unit ensure-sysext.service)... Nov 5 15:50:24.527234 systemd[1]: Reloading... Nov 5 15:50:24.531123 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 15:50:24.532622 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 15:50:24.533044 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 15:50:24.533335 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 15:50:24.537437 systemd-tmpfiles[1370]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 15:50:24.540180 systemd-tmpfiles[1370]: ACLs are not supported, ignoring. Nov 5 15:50:24.541773 systemd-tmpfiles[1370]: ACLs are not supported, ignoring. Nov 5 15:50:24.554532 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:50:24.554715 systemd-tmpfiles[1370]: Skipping /boot Nov 5 15:50:24.572137 systemd-tmpfiles[1370]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:50:24.572363 systemd-tmpfiles[1370]: Skipping /boot Nov 5 15:50:24.664582 zram_generator::config[1404]: No configuration found. Nov 5 15:50:24.959619 systemd[1]: Reloading finished in 431 ms. Nov 5 15:50:24.974002 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 15:50:24.993370 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:50:25.006077 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Nov 5 15:50:25.009918 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:50:25.012737 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:50:25.018833 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:50:25.024247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:50:25.028175 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 15:50:25.035040 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:50:25.035332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:50:25.038560 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:50:25.044699 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:50:25.052323 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:50:25.053382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:50:25.054675 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:50:25.054799 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:50:25.059749 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 5 15:50:25.059949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:50:25.060145 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:50:25.060263 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:50:25.060371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:50:25.078580 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:50:25.078876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:50:25.081723 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:50:25.083764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:50:25.083932 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:50:25.084078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:50:25.090712 systemd[1]: Finished ensure-sysext.service. Nov 5 15:50:25.100942 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 15:50:25.129935 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Nov 5 15:50:25.157364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:50:25.158351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:50:25.160540 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:50:25.184744 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:50:25.187415 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:50:25.201373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:50:25.202784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:50:25.205027 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:50:25.211405 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:50:25.212608 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:50:25.217590 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:50:25.255813 systemd-udevd[1450]: Using default interface naming scheme 'v257'. Nov 5 15:50:25.278099 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:50:25.279614 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:50:25.318581 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:50:25.319657 augenrules[1487]: No rules Nov 5 15:50:25.321645 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:50:25.321927 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 5 15:50:25.330814 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:50:25.349010 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 15:50:25.350683 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:50:25.461604 systemd-networkd[1496]: lo: Link UP Nov 5 15:50:25.462112 systemd-networkd[1496]: lo: Gained carrier Nov 5 15:50:25.465860 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:50:25.480246 systemd[1]: Reached target network.target - Network. Nov 5 15:50:25.485054 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:50:25.492711 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 15:50:25.543099 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:50:25.578085 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 15:50:25.623329 systemd[1]: Condition check resulted in dev-disk-by\x2dlabel-config\x2d2.device - /dev/disk/by-label/config-2 being skipped. Nov 5 15:50:25.626753 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 5 15:50:25.628275 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:50:25.628431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:50:25.632783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:50:25.636987 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:50:25.643630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 5 15:50:25.645785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:50:25.645859 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:50:25.645909 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:50:25.645936 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 15:50:25.695669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:50:25.695934 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:50:25.704545 kernel: ISO 9660 Extensions: RRIP_1991A Nov 5 15:50:25.707152 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 5 15:50:25.738428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:50:25.738812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:50:25.740365 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:50:25.741218 systemd-networkd[1496]: eth0: Configuring with /run/systemd/network/10-7e:8e:c2:99:95:9a.network. Nov 5 15:50:25.743028 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:50:25.743813 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 5 15:50:25.747983 systemd-networkd[1496]: eth0: Link UP Nov 5 15:50:25.749401 systemd-networkd[1496]: eth0: Gained carrier Nov 5 15:50:25.751899 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:50:25.758615 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Nov 5 15:50:25.770251 systemd-networkd[1496]: eth1: Configuring with /run/systemd/network/10-8e:67:e6:0a:78:ef.network. Nov 5 15:50:25.771841 systemd-networkd[1496]: eth1: Link UP Nov 5 15:50:25.772673 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Nov 5 15:50:25.775004 systemd-networkd[1496]: eth1: Gained carrier Nov 5 15:50:25.775261 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Nov 5 15:50:25.781137 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Nov 5 15:50:25.783527 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Nov 5 15:50:25.795528 ldconfig[1448]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:50:25.802681 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:50:25.803627 kernel: mousedev: PS/2 mouse device common for all mice Nov 5 15:50:25.808522 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:50:25.841504 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 5 15:50:25.841696 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:50:25.843092 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:50:25.844572 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Nov 5 15:50:25.845574 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:50:25.846914 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 15:50:25.848528 kernel: ACPI: button: Power Button [PWRF] Nov 5 15:50:25.850247 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:50:25.852437 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:50:25.854435 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:50:25.856348 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:50:25.856440 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:50:25.858591 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:50:25.861579 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:50:25.866577 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:50:25.873256 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:50:25.875550 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:50:25.877002 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:50:25.887613 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:50:25.889990 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:50:25.892974 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Nov 5 15:50:25.894487 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 5 15:50:25.896466 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 5 15:50:25.906193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 15:50:25.907247 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:50:25.909942 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:50:25.911173 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:50:25.911205 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:50:25.913944 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:50:25.917600 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 15:50:25.933817 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:50:25.969783 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:50:25.973267 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:50:25.979866 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:50:25.982667 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:50:25.985765 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 15:50:26.003642 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:50:26.011199 jq[1558]: false Nov 5 15:50:26.012788 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:50:26.024747 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Nov 5 15:50:26.031910 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:50:26.048784 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 15:50:26.058840 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:50:26.061275 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:50:26.061942 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:50:26.065595 coreos-metadata[1544]: Nov 05 15:50:26.065 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:50:26.068082 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:50:26.077924 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:50:26.082105 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:50:26.083408 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:50:26.083775 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:50:26.084133 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:50:26.084399 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:50:26.086877 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:50:26.087085 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 5 15:50:26.100292 coreos-metadata[1544]: Nov 05 15:50:26.099 INFO Fetch successful Nov 5 15:50:26.104952 extend-filesystems[1560]: Found /dev/vda6 Nov 5 15:50:26.112550 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing passwd entry cache Nov 5 15:50:26.109768 oslogin_cache_refresh[1561]: Refreshing passwd entry cache Nov 5 15:50:26.130117 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting users, quitting Nov 5 15:50:26.130117 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:50:26.130117 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Refreshing group entry cache Nov 5 15:50:26.130117 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Failure getting groups, quitting Nov 5 15:50:26.130117 google_oslogin_nss_cache[1561]: oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:50:26.114663 oslogin_cache_refresh[1561]: Failure getting users, quitting Nov 5 15:50:26.117731 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 15:50:26.114687 oslogin_cache_refresh[1561]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:50:26.118003 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 15:50:26.114759 oslogin_cache_refresh[1561]: Refreshing group entry cache Nov 5 15:50:26.115904 oslogin_cache_refresh[1561]: Failure getting groups, quitting Nov 5 15:50:26.115916 oslogin_cache_refresh[1561]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Nov 5 15:50:26.141755 extend-filesystems[1560]: Found /dev/vda9 Nov 5 15:50:26.141755 extend-filesystems[1560]: Checking size of /dev/vda9 Nov 5 15:50:26.159655 jq[1579]: true Nov 5 15:50:26.182538 dbus-daemon[1545]: [system] SELinux support is enabled Nov 5 15:50:26.182885 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:50:26.189324 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:50:26.191428 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:50:26.192749 extend-filesystems[1560]: Resized partition /dev/vda9 Nov 5 15:50:26.192953 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 15:50:26.196027 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:50:26.196162 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 5 15:50:26.196184 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Nov 5 15:50:26.197019 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:50:26.202727 update_engine[1575]: I20251105 15:50:26.201395 1575 main.cc:92] Flatcar Update Engine starting Nov 5 15:50:26.206226 tar[1581]: linux-amd64/LICENSE Nov 5 15:50:26.206226 tar[1581]: linux-amd64/helm Nov 5 15:50:26.218995 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:50:26.234986 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 14138363 blocks Nov 5 15:50:26.237025 update_engine[1575]: I20251105 15:50:26.236959 1575 update_check_scheduler.cc:74] Next update check in 2m21s Nov 5 15:50:26.240087 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:50:26.247831 jq[1602]: true Nov 5 15:50:26.307198 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Nov 5 15:50:26.306371 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:50:26.313200 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 15:50:26.319152 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:50:26.323340 extend-filesystems[1609]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 15:50:26.323340 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 7 Nov 5 15:50:26.323340 extend-filesystems[1609]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Nov 5 15:50:26.340142 extend-filesystems[1560]: Resized filesystem in /dev/vda9 Nov 5 15:50:26.328033 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:50:26.330755 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 15:50:26.390020 bash[1636]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:50:26.393597 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Nov 5 15:50:26.402842 systemd[1]: Starting sshkeys.service... Nov 5 15:50:26.454220 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 15:50:26.479185 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 5 15:50:26.616572 systemd-logind[1574]: New seat seat0. Nov 5 15:50:26.617547 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:50:26.659299 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 5 15:50:26.692801 coreos-metadata[1639]: Nov 05 15:50:26.692 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 5 15:50:26.704504 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 5 15:50:26.706959 kernel: Console: switching to colour dummy device 80x25 Nov 5 15:50:26.708373 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 5 15:50:26.708455 kernel: [drm] features: -context_init Nov 5 15:50:26.714506 kernel: [drm] number of scanouts: 1 Nov 5 15:50:26.714618 kernel: [drm] number of cap sets: 0 Nov 5 15:50:26.714636 coreos-metadata[1639]: Nov 05 15:50:26.712 INFO Fetch successful Nov 5 15:50:26.719515 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0 Nov 5 15:50:26.731624 unknown[1639]: wrote ssh authorized keys file for user: core Nov 5 15:50:26.762525 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 5 15:50:26.767507 kernel: Console: switching to colour frame buffer device 128x48 Nov 5 15:50:26.812217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 5 15:50:26.859506 sshd_keygen[1604]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:50:26.865951 locksmithd[1614]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:50:26.903600 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 5 15:50:26.932463 systemd-logind[1574]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 15:50:26.934874 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 15:50:26.949809 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:50:26.950570 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 5 15:50:26.965983 systemd[1]: Finished sshkeys.service. Nov 5 15:50:26.970455 containerd[1598]: time="2025-11-05T15:50:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:50:26.978549 kernel: EDAC MC: Ver: 3.0.0 Nov 5 15:50:26.979945 containerd[1598]: time="2025-11-05T15:50:26.979900031Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:50:27.021979 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:50:27.027077 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.027881564Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.222µs" Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.027926449Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.027953667Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.028137999Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.028154923Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.028187757Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.028268449Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.028284148Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.028616328Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.028637881Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.029311862Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029505 containerd[1598]: time="2025-11-05T15:50:27.029338391Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029864 containerd[1598]: time="2025-11-05T15:50:27.029516946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029864 containerd[1598]: time="2025-11-05T15:50:27.029744260Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029864 containerd[1598]: time="2025-11-05T15:50:27.029782921Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:50:27.029864 containerd[1598]: time="2025-11-05T15:50:27.029794316Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:50:27.029864 containerd[1598]: time="2025-11-05T15:50:27.029828580Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:50:27.035691 containerd[1598]: time="2025-11-05T15:50:27.035639954Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:50:27.035806 containerd[1598]: time="2025-11-05T15:50:27.035769551Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:50:27.048949 containerd[1598]: time="2025-11-05T15:50:27.048887813Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 
Nov 5 15:50:27.048949 containerd[1598]: time="2025-11-05T15:50:27.048956503Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:50:27.049126 containerd[1598]: time="2025-11-05T15:50:27.048999612Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:50:27.049126 containerd[1598]: time="2025-11-05T15:50:27.049014539Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:50:27.049126 containerd[1598]: time="2025-11-05T15:50:27.049029128Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:50:27.049126 containerd[1598]: time="2025-11-05T15:50:27.049041049Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:50:27.049126 containerd[1598]: time="2025-11-05T15:50:27.049056346Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:50:27.049126 containerd[1598]: time="2025-11-05T15:50:27.049068967Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:50:27.049126 containerd[1598]: time="2025-11-05T15:50:27.049081600Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:50:27.049126 containerd[1598]: time="2025-11-05T15:50:27.049093966Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:50:27.049126 containerd[1598]: time="2025-11-05T15:50:27.049104270Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:50:27.049334 containerd[1598]: time="2025-11-05T15:50:27.049130315Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 
15:50:27.049334 containerd[1598]: time="2025-11-05T15:50:27.049291795Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:50:27.049334 containerd[1598]: time="2025-11-05T15:50:27.049320962Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:50:27.049388 containerd[1598]: time="2025-11-05T15:50:27.049339654Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:50:27.049388 containerd[1598]: time="2025-11-05T15:50:27.049350391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:50:27.049388 containerd[1598]: time="2025-11-05T15:50:27.049361447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:50:27.049388 containerd[1598]: time="2025-11-05T15:50:27.049371546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:50:27.049388 containerd[1598]: time="2025-11-05T15:50:27.049382004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:50:27.049519 containerd[1598]: time="2025-11-05T15:50:27.049392343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:50:27.049519 containerd[1598]: time="2025-11-05T15:50:27.049407886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:50:27.049519 containerd[1598]: time="2025-11-05T15:50:27.049427608Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:50:27.049519 containerd[1598]: time="2025-11-05T15:50:27.049447450Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:50:27.049906 containerd[1598]: time="2025-11-05T15:50:27.049879051Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:50:27.049906 containerd[1598]: time="2025-11-05T15:50:27.049905383Z" level=info msg="Start snapshots syncer" Nov 5 15:50:27.051962 containerd[1598]: time="2025-11-05T15:50:27.051907765Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:50:27.052623 containerd[1598]: time="2025-11-05T15:50:27.052208707Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\
"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:50:27.052623 containerd[1598]: time="2025-11-05T15:50:27.052263092Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:50:27.052898 containerd[1598]: time="2025-11-05T15:50:27.052340559Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:50:27.063190 containerd[1598]: time="2025-11-05T15:50:27.063135023Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:50:27.063190 containerd[1598]: time="2025-11-05T15:50:27.063188947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:50:27.063362 containerd[1598]: time="2025-11-05T15:50:27.063203117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:50:27.063362 containerd[1598]: time="2025-11-05T15:50:27.063214685Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:50:27.063362 containerd[1598]: time="2025-11-05T15:50:27.063272947Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:50:27.063362 containerd[1598]: time="2025-11-05T15:50:27.063294403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:50:27.063362 containerd[1598]: time="2025-11-05T15:50:27.063310036Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:50:27.063362 containerd[1598]: time="2025-11-05T15:50:27.063339930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 
Nov 5 15:50:27.063362 containerd[1598]: time="2025-11-05T15:50:27.063351046Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:50:27.063362 containerd[1598]: time="2025-11-05T15:50:27.063361602Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063411967Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063432747Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063442036Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063451548Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063459674Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063506416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063533512Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063554134Z" level=info msg="runtime interface created" Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063560637Z" level=info msg="created NRI interface" Nov 5 15:50:27.063690 
containerd[1598]: time="2025-11-05T15:50:27.063569553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063589146Z" level=info msg="Connect containerd service" Nov 5 15:50:27.063690 containerd[1598]: time="2025-11-05T15:50:27.063618984Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:50:27.079828 containerd[1598]: time="2025-11-05T15:50:27.079705704Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:50:27.104314 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:50:27.104648 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:50:27.112604 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:50:27.115024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:50:27.116357 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:50:27.171353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:50:27.186941 systemd-networkd[1496]: eth1: Gained IPv6LL Nov 5 15:50:27.198391 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Nov 5 15:50:27.201448 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:50:27.204956 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:50:27.210514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:27.216155 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:50:27.239733 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
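The `failed to load cni during init` error above is expected on a first boot: containerd found no network configuration in `/etc/cni/net.d`, which is normally populated later by a CNI plugin installer. For orientation only, a minimal bridge-network `.conflist` of the kind containerd looks for in that directory might look like the following (the network name, bridge name, and subnet here are illustrative assumptions, not values taken from this log):

```json
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

Once a file like this exists under `/etc/cni/net.d` (the `confDir` shown in the cri plugin config earlier in this log), the "cni network conf syncer" started below picks it up without a restart.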
Nov 5 15:50:27.249853 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:50:27.266911 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:50:27.267533 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:50:27.365868 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:50:27.397676 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:50:27.434640 containerd[1598]: time="2025-11-05T15:50:27.434583698Z" level=info msg="Start subscribing containerd event" Nov 5 15:50:27.434640 containerd[1598]: time="2025-11-05T15:50:27.434645006Z" level=info msg="Start recovering state" Nov 5 15:50:27.434853 containerd[1598]: time="2025-11-05T15:50:27.434760237Z" level=info msg="Start event monitor" Nov 5 15:50:27.434853 containerd[1598]: time="2025-11-05T15:50:27.434774678Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:50:27.434853 containerd[1598]: time="2025-11-05T15:50:27.434781870Z" level=info msg="Start streaming server" Nov 5 15:50:27.434853 containerd[1598]: time="2025-11-05T15:50:27.434795662Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:50:27.434853 containerd[1598]: time="2025-11-05T15:50:27.434804098Z" level=info msg="runtime interface starting up..." Nov 5 15:50:27.434853 containerd[1598]: time="2025-11-05T15:50:27.434810560Z" level=info msg="starting plugins..." Nov 5 15:50:27.435847 containerd[1598]: time="2025-11-05T15:50:27.434825530Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:50:27.437923 containerd[1598]: time="2025-11-05T15:50:27.437882025Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:50:27.438043 containerd[1598]: time="2025-11-05T15:50:27.437984505Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:50:27.440767 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 5 15:50:27.444693 containerd[1598]: time="2025-11-05T15:50:27.444635907Z" level=info msg="containerd successfully booted in 0.474617s" Nov 5 15:50:27.506723 systemd-networkd[1496]: eth0: Gained IPv6LL Nov 5 15:50:27.507988 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Nov 5 15:50:27.628455 tar[1581]: linux-amd64/README.md Nov 5 15:50:27.652783 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:50:28.040999 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:50:28.047844 systemd[1]: Started sshd@0-143.110.239.237:22-139.178.68.195:37032.service - OpenSSH per-connection server daemon (139.178.68.195:37032). Nov 5 15:50:28.168749 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 37032 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:28.170532 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:28.181866 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 15:50:28.192561 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 15:50:28.210613 systemd-logind[1574]: New session 1 of user core. Nov 5 15:50:28.237563 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:50:28.247014 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 15:50:28.273015 (systemd)[1726]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:50:28.277296 systemd-logind[1574]: New session c1 of user core. Nov 5 15:50:28.417413 systemd[1726]: Queued start job for default target default.target. Nov 5 15:50:28.425361 systemd[1726]: Created slice app.slice - User Application Slice. Nov 5 15:50:28.425405 systemd[1726]: Reached target paths.target - Paths. Nov 5 15:50:28.425459 systemd[1726]: Reached target timers.target - Timers. 
Nov 5 15:50:28.427393 systemd[1726]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:50:28.454906 systemd[1726]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 15:50:28.455429 systemd[1726]: Reached target sockets.target - Sockets. Nov 5 15:50:28.455668 systemd[1726]: Reached target basic.target - Basic System. Nov 5 15:50:28.455808 systemd[1726]: Reached target default.target - Main User Target. Nov 5 15:50:28.456018 systemd[1726]: Startup finished in 167ms. Nov 5 15:50:28.457057 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 15:50:28.469841 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:50:28.551338 systemd[1]: Started sshd@1-143.110.239.237:22-139.178.68.195:37038.service - OpenSSH per-connection server daemon (139.178.68.195:37038). Nov 5 15:50:28.669244 sshd[1737]: Accepted publickey for core from 139.178.68.195 port 37038 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:28.671272 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:28.678827 systemd-logind[1574]: New session 2 of user core. Nov 5 15:50:28.683182 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 15:50:28.717762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:28.719453 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:50:28.723614 systemd[1]: Startup finished in 2.892s (kernel) + 6.487s (initrd) + 6.441s (userspace) = 15.821s. 
Nov 5 15:50:28.732873 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:50:28.775564 sshd[1744]: Connection closed by 139.178.68.195 port 37038 Nov 5 15:50:28.776428 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:28.789904 systemd[1]: Started sshd@2-143.110.239.237:22-139.178.68.195:37042.service - OpenSSH per-connection server daemon (139.178.68.195:37042). Nov 5 15:50:28.796426 systemd[1]: sshd@1-143.110.239.237:22-139.178.68.195:37038.service: Deactivated successfully. Nov 5 15:50:28.802977 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 15:50:28.807364 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Nov 5 15:50:28.811127 systemd-logind[1574]: Removed session 2. Nov 5 15:50:28.878308 sshd[1751]: Accepted publickey for core from 139.178.68.195 port 37042 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:28.880434 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:28.889562 systemd-logind[1574]: New session 3 of user core. Nov 5 15:50:28.896413 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 15:50:28.961874 sshd[1761]: Connection closed by 139.178.68.195 port 37042 Nov 5 15:50:28.965226 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:28.977382 systemd[1]: sshd@2-143.110.239.237:22-139.178.68.195:37042.service: Deactivated successfully. Nov 5 15:50:28.982578 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 15:50:28.985309 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Nov 5 15:50:28.993950 systemd[1]: Started sshd@3-143.110.239.237:22-139.178.68.195:37050.service - OpenSSH per-connection server daemon (139.178.68.195:37050). Nov 5 15:50:28.996752 systemd-logind[1574]: Removed session 3. 
Nov 5 15:50:29.086361 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 37050 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:29.089859 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:29.102781 systemd-logind[1574]: New session 4 of user core. Nov 5 15:50:29.108785 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 15:50:29.180189 sshd[1772]: Connection closed by 139.178.68.195 port 37050 Nov 5 15:50:29.179837 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:29.196640 systemd[1]: sshd@3-143.110.239.237:22-139.178.68.195:37050.service: Deactivated successfully. Nov 5 15:50:29.201414 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:50:29.204568 systemd-logind[1574]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:50:29.210897 systemd[1]: Started sshd@4-143.110.239.237:22-139.178.68.195:37056.service - OpenSSH per-connection server daemon (139.178.68.195:37056). Nov 5 15:50:29.213352 systemd-logind[1574]: Removed session 4. Nov 5 15:50:29.304589 sshd[1778]: Accepted publickey for core from 139.178.68.195 port 37056 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:29.306134 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:29.316904 systemd-logind[1574]: New session 5 of user core. Nov 5 15:50:29.322386 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 5 15:50:29.403084 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:50:29.404016 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:50:29.428839 sudo[1783]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:29.433559 sshd[1782]: Connection closed by 139.178.68.195 port 37056 Nov 5 15:50:29.433426 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:29.449711 systemd[1]: sshd@4-143.110.239.237:22-139.178.68.195:37056.service: Deactivated successfully. Nov 5 15:50:29.454264 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 15:50:29.457451 systemd-logind[1574]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:50:29.461838 systemd[1]: Started sshd@5-143.110.239.237:22-139.178.68.195:37072.service - OpenSSH per-connection server daemon (139.178.68.195:37072). Nov 5 15:50:29.465637 systemd-logind[1574]: Removed session 5. Nov 5 15:50:29.543004 kubelet[1746]: E1105 15:50:29.542933 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:50:29.547502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:50:29.547729 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:50:29.548494 systemd[1]: kubelet.service: Consumed 1.556s CPU time, 267.4M memory peak. Nov 5 15:50:29.550081 sshd[1789]: Accepted publickey for core from 139.178.68.195 port 37072 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:29.552827 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:29.559867 systemd-logind[1574]: New session 6 of user core. 
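The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is likewise expected before the node joins a cluster: that file is normally written by `kubeadm init` or `kubeadm join`, and systemd keeps restarting kubelet until it appears. For orientation only, a minimal `KubeletConfiguration` of the kind that ends up at that path might begin like this (a sketch, not the file this node eventually receives):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# The cgroup driver must match the container runtime; the cri plugin
# config earlier in this log shows containerd running with
# SystemdCgroup=true, so kubelet should use the systemd driver too.
cgroupDriver: systemd
```

The mismatch case (kubelet on `cgroupfs`, runtime on systemd cgroups) is a common source of pod-start failures, which is why the driver is worth pinning explicitly.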
Nov 5 15:50:29.568088 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 15:50:29.634922 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:50:29.636019 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:50:29.644444 sudo[1795]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:29.655244 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:50:29.655725 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:50:29.670465 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:50:29.735989 augenrules[1817]: No rules Nov 5 15:50:29.737580 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:50:29.738017 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:50:29.740216 sudo[1794]: pam_unix(sudo:session): session closed for user root Nov 5 15:50:29.744969 sshd[1793]: Connection closed by 139.178.68.195 port 37072 Nov 5 15:50:29.745641 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Nov 5 15:50:29.758663 systemd[1]: sshd@5-143.110.239.237:22-139.178.68.195:37072.service: Deactivated successfully. Nov 5 15:50:29.761204 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:50:29.763303 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:50:29.766783 systemd[1]: Started sshd@6-143.110.239.237:22-139.178.68.195:37074.service - OpenSSH per-connection server daemon (139.178.68.195:37074). Nov 5 15:50:29.768621 systemd-logind[1574]: Removed session 6. 
Nov 5 15:50:29.829205 sshd[1826]: Accepted publickey for core from 139.178.68.195 port 37074 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:50:29.830809 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:50:29.838351 systemd-logind[1574]: New session 7 of user core. Nov 5 15:50:29.844881 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 15:50:29.906458 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:50:29.906824 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:50:30.556280 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 15:50:30.569198 (dockerd)[1848]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:50:31.046951 dockerd[1848]: time="2025-11-05T15:50:31.046738044Z" level=info msg="Starting up" Nov 5 15:50:31.049226 dockerd[1848]: time="2025-11-05T15:50:31.049199806Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:50:31.068332 dockerd[1848]: time="2025-11-05T15:50:31.068237478Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:50:31.097515 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3257857333-merged.mount: Deactivated successfully. Nov 5 15:50:31.267557 dockerd[1848]: time="2025-11-05T15:50:31.267154145Z" level=info msg="Loading containers: start." Nov 5 15:50:31.282603 kernel: Initializing XFRM netlink socket Nov 5 15:50:31.567597 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Nov 5 15:50:31.580608 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. 
Nov 5 15:50:31.625932 systemd-networkd[1496]: docker0: Link UP Nov 5 15:50:31.626659 systemd-timesyncd[1463]: Network configuration changed, trying to establish connection. Nov 5 15:50:31.631051 dockerd[1848]: time="2025-11-05T15:50:31.630998180Z" level=info msg="Loading containers: done." Nov 5 15:50:31.655270 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1715570974-merged.mount: Deactivated successfully. Nov 5 15:50:31.656206 dockerd[1848]: time="2025-11-05T15:50:31.656143260Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:50:31.656296 dockerd[1848]: time="2025-11-05T15:50:31.656271299Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:50:31.656515 dockerd[1848]: time="2025-11-05T15:50:31.656425415Z" level=info msg="Initializing buildkit" Nov 5 15:50:31.699710 dockerd[1848]: time="2025-11-05T15:50:31.699567257Z" level=info msg="Completed buildkit initialization" Nov 5 15:50:31.712233 dockerd[1848]: time="2025-11-05T15:50:31.712102501Z" level=info msg="Daemon has completed initialization" Nov 5 15:50:31.713774 dockerd[1848]: time="2025-11-05T15:50:31.712578045Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:50:31.714237 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 15:50:32.693731 containerd[1598]: time="2025-11-05T15:50:32.693667381Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 15:50:33.385858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217832192.mount: Deactivated successfully. 
Nov 5 15:50:34.800379 containerd[1598]: time="2025-11-05T15:50:34.800284949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:34.802054 containerd[1598]: time="2025-11-05T15:50:34.801622419Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 5 15:50:34.802996 containerd[1598]: time="2025-11-05T15:50:34.802939564Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:34.805874 containerd[1598]: time="2025-11-05T15:50:34.805821784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:34.807078 containerd[1598]: time="2025-11-05T15:50:34.807029806Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.113315292s" Nov 5 15:50:34.807338 containerd[1598]: time="2025-11-05T15:50:34.807205237Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 5 15:50:34.808121 containerd[1598]: time="2025-11-05T15:50:34.808072022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 15:50:36.541515 containerd[1598]: time="2025-11-05T15:50:36.541280903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:36.543378 containerd[1598]: time="2025-11-05T15:50:36.543295630Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 5 15:50:36.544747 containerd[1598]: time="2025-11-05T15:50:36.544640397Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:36.547311 containerd[1598]: time="2025-11-05T15:50:36.547253467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:36.548689 containerd[1598]: time="2025-11-05T15:50:36.548209534Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.740100361s" Nov 5 15:50:36.548689 containerd[1598]: time="2025-11-05T15:50:36.548250214Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 5 15:50:36.549813 containerd[1598]: time="2025-11-05T15:50:36.549776352Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 15:50:38.089688 containerd[1598]: time="2025-11-05T15:50:38.089594160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:38.091310 containerd[1598]: time="2025-11-05T15:50:38.091184571Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 5 15:50:38.093529 containerd[1598]: time="2025-11-05T15:50:38.093159925Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:38.097651 containerd[1598]: time="2025-11-05T15:50:38.096886475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:38.098702 containerd[1598]: time="2025-11-05T15:50:38.098643423Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.548819873s" Nov 5 15:50:38.098702 containerd[1598]: time="2025-11-05T15:50:38.098705830Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 5 15:50:38.099574 containerd[1598]: time="2025-11-05T15:50:38.099537119Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 15:50:39.477674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3320236522.mount: Deactivated successfully. Nov 5 15:50:39.798568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 15:50:39.803064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:40.084704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 5 15:50:40.098018 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:50:40.204846 kubelet[2148]: E1105 15:50:40.204539 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:50:40.211261 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:50:40.211429 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:50:40.211875 systemd[1]: kubelet.service: Consumed 270ms CPU time, 108M memory peak. Nov 5 15:50:40.437916 containerd[1598]: time="2025-11-05T15:50:40.437323448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:40.440100 containerd[1598]: time="2025-11-05T15:50:40.440042057Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 5 15:50:40.441609 containerd[1598]: time="2025-11-05T15:50:40.441555240Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:40.444835 containerd[1598]: time="2025-11-05T15:50:40.444278533Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 2.344565849s" Nov 5 15:50:40.444835 containerd[1598]: 
time="2025-11-05T15:50:40.444323485Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 5 15:50:40.444835 containerd[1598]: time="2025-11-05T15:50:40.444628000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:40.445749 containerd[1598]: time="2025-11-05T15:50:40.445526107Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 15:50:40.447819 systemd-resolved[1287]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 5 15:50:41.048197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount839097517.mount: Deactivated successfully. Nov 5 15:50:42.225524 containerd[1598]: time="2025-11-05T15:50:42.224012111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:42.225524 containerd[1598]: time="2025-11-05T15:50:42.225246685Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 5 15:50:42.226588 containerd[1598]: time="2025-11-05T15:50:42.226535004Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:42.235272 containerd[1598]: time="2025-11-05T15:50:42.235157965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:42.236677 containerd[1598]: time="2025-11-05T15:50:42.236622293Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image 
id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.791045427s" Nov 5 15:50:42.236877 containerd[1598]: time="2025-11-05T15:50:42.236852018Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 5 15:50:42.237800 containerd[1598]: time="2025-11-05T15:50:42.237770361Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 15:50:42.799242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1599785150.mount: Deactivated successfully. Nov 5 15:50:42.814436 containerd[1598]: time="2025-11-05T15:50:42.814337066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:42.816006 containerd[1598]: time="2025-11-05T15:50:42.815964516Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 15:50:42.816994 containerd[1598]: time="2025-11-05T15:50:42.816946400Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:42.820315 containerd[1598]: time="2025-11-05T15:50:42.820260918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:50:42.821960 containerd[1598]: time="2025-11-05T15:50:42.821901315Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 583.938193ms" Nov 5 15:50:42.821960 containerd[1598]: time="2025-11-05T15:50:42.821946015Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 5 15:50:42.822967 containerd[1598]: time="2025-11-05T15:50:42.822467962Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 15:50:43.422878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3336478104.mount: Deactivated successfully. Nov 5 15:50:43.506816 systemd-resolved[1287]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Nov 5 15:50:45.422190 containerd[1598]: time="2025-11-05T15:50:45.422087122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:45.423857 containerd[1598]: time="2025-11-05T15:50:45.423815229Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 5 15:50:45.426501 containerd[1598]: time="2025-11-05T15:50:45.424426197Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:45.428006 containerd[1598]: time="2025-11-05T15:50:45.427961773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:50:45.429914 containerd[1598]: time="2025-11-05T15:50:45.429863744Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" 
with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.607340144s" Nov 5 15:50:45.430003 containerd[1598]: time="2025-11-05T15:50:45.429921673Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 5 15:50:49.908548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:49.909420 systemd[1]: kubelet.service: Consumed 270ms CPU time, 108M memory peak. Nov 5 15:50:49.913537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:49.963467 systemd[1]: Reload requested from client PID 2294 ('systemctl') (unit session-7.scope)... Nov 5 15:50:49.963514 systemd[1]: Reloading... Nov 5 15:50:50.130552 zram_generator::config[2341]: No configuration found. Nov 5 15:50:50.902274 systemd[1]: Reloading finished in 938 ms. Nov 5 15:50:50.995565 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:51.001707 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:50:51.002506 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:51.002747 systemd[1]: kubelet.service: Consumed 154ms CPU time, 97.9M memory peak. Nov 5 15:50:51.006902 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:51.230261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:51.245137 (kubelet)[2394]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:50:51.318030 kubelet[2394]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:51.318605 kubelet[2394]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:50:51.318665 kubelet[2394]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:51.321391 kubelet[2394]: I1105 15:50:51.321317 2394 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:50:52.096188 kubelet[2394]: I1105 15:50:52.096133 2394 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:50:52.096419 kubelet[2394]: I1105 15:50:52.096403 2394 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:50:52.097235 kubelet[2394]: I1105 15:50:52.096944 2394 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:50:52.136275 kubelet[2394]: I1105 15:50:52.135529 2394 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:50:52.138032 kubelet[2394]: E1105 15:50:52.137528 2394 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://143.110.239.237:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 143.110.239.237:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:50:52.152071 kubelet[2394]: I1105 15:50:52.152023 2394 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:50:52.163627 kubelet[2394]: I1105 15:50:52.163565 2394 
server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 15:50:52.166368 kubelet[2394]: I1105 15:50:52.166281 2394 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:50:52.169538 kubelet[2394]: I1105 15:50:52.166350 2394 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-6-a291033793","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:50:52.170839 kubelet[2394]: I1105 15:50:52.170792 2394 
topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:50:52.170839 kubelet[2394]: I1105 15:50:52.170835 2394 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 15:50:52.171043 kubelet[2394]: I1105 15:50:52.170998 2394 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:52.173980 kubelet[2394]: I1105 15:50:52.173918 2394 kubelet.go:480] "Attempting to sync node with API server" Nov 5 15:50:52.174153 kubelet[2394]: I1105 15:50:52.173997 2394 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:50:52.174153 kubelet[2394]: I1105 15:50:52.174042 2394 kubelet.go:386] "Adding apiserver pod source" Nov 5 15:50:52.174153 kubelet[2394]: I1105 15:50:52.174072 2394 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:50:52.182985 kubelet[2394]: E1105 15:50:52.182924 2394 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.110.239.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-6-a291033793&limit=500&resourceVersion=0\": dial tcp 143.110.239.237:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:50:52.188363 kubelet[2394]: I1105 15:50:52.188302 2394 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:50:52.189053 kubelet[2394]: E1105 15:50:52.189008 2394 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.110.239.237:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.239.237:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:50:52.189455 kubelet[2394]: I1105 15:50:52.189435 2394 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet 
mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:50:52.190373 kubelet[2394]: W1105 15:50:52.190347 2394 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 15:50:52.198383 kubelet[2394]: I1105 15:50:52.198347 2394 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:50:52.198646 kubelet[2394]: I1105 15:50:52.198635 2394 server.go:1289] "Started kubelet" Nov 5 15:50:52.201305 kubelet[2394]: I1105 15:50:52.201171 2394 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:50:52.202338 kubelet[2394]: I1105 15:50:52.202299 2394 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:50:52.202621 kubelet[2394]: I1105 15:50:52.202607 2394 server.go:317] "Adding debug handlers to kubelet server" Nov 5 15:50:52.207655 kubelet[2394]: I1105 15:50:52.207594 2394 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:50:52.208011 kubelet[2394]: I1105 15:50:52.207995 2394 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:50:52.214102 kubelet[2394]: I1105 15:50:52.214057 2394 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:50:52.216815 kubelet[2394]: I1105 15:50:52.216780 2394 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:50:52.220389 kubelet[2394]: E1105 15:50:52.220039 2394 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.1-6-a291033793\" not found" Nov 5 15:50:52.220551 kubelet[2394]: I1105 15:50:52.220451 2394 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:50:52.220726 kubelet[2394]: E1105 15:50:52.217459 2394 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://143.110.239.237:6443/api/v1/namespaces/default/events\": dial tcp 143.110.239.237:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487.0.1-6-a291033793.18752721136abff7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487.0.1-6-a291033793,UID:ci-4487.0.1-6-a291033793,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487.0.1-6-a291033793,},FirstTimestamp:2025-11-05 15:50:52.198584311 +0000 UTC m=+0.945465844,LastTimestamp:2025-11-05 15:50:52.198584311 +0000 UTC m=+0.945465844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487.0.1-6-a291033793,}" Nov 5 15:50:52.221007 kubelet[2394]: I1105 15:50:52.220810 2394 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:50:52.221448 kubelet[2394]: E1105 15:50:52.221200 2394 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.110.239.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.239.237:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:50:52.221448 kubelet[2394]: E1105 15:50:52.221292 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.239.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-6-a291033793?timeout=10s\": dial tcp 143.110.239.237:6443: connect: connection refused" interval="200ms" Nov 5 15:50:52.222138 kubelet[2394]: I1105 15:50:52.221613 2394 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:50:52.222138 kubelet[2394]: I1105 15:50:52.221678 2394 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:50:52.223247 kubelet[2394]: I1105 15:50:52.223164 2394 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:50:52.243194 kubelet[2394]: I1105 15:50:52.243117 2394 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:50:52.243506 kubelet[2394]: I1105 15:50:52.243148 2394 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:50:52.243658 kubelet[2394]: I1105 15:50:52.243582 2394 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:52.246869 kubelet[2394]: I1105 15:50:52.246844 2394 policy_none.go:49] "None policy: Start" Nov 5 15:50:52.247096 kubelet[2394]: I1105 15:50:52.247035 2394 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:50:52.247096 kubelet[2394]: I1105 15:50:52.247053 2394 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:50:52.252948 kubelet[2394]: I1105 15:50:52.252882 2394 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 15:50:52.254410 kubelet[2394]: I1105 15:50:52.254370 2394 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 15:50:52.254410 kubelet[2394]: I1105 15:50:52.254390 2394 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 15:50:52.254518 kubelet[2394]: I1105 15:50:52.254413 2394 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 15:50:52.254518 kubelet[2394]: I1105 15:50:52.254421 2394 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 15:50:52.254518 kubelet[2394]: E1105 15:50:52.254463 2394 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:50:52.257252 kubelet[2394]: E1105 15:50:52.257227 2394 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.110.239.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.239.237:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:50:52.262845 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:50:52.275332 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 15:50:52.280667 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 15:50:52.299130 kubelet[2394]: E1105 15:50:52.298967 2394 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:50:52.300428 kubelet[2394]: I1105 15:50:52.299985 2394 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:50:52.300428 kubelet[2394]: I1105 15:50:52.300002 2394 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:50:52.300428 kubelet[2394]: I1105 15:50:52.300339 2394 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:50:52.303056 kubelet[2394]: E1105 15:50:52.303034 2394 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 15:50:52.303818 kubelet[2394]: E1105 15:50:52.303791 2394 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487.0.1-6-a291033793\" not found" Nov 5 15:50:52.368767 systemd[1]: Created slice kubepods-burstable-pod921026489d0bdcf0f8cea74064ff7986.slice - libcontainer container kubepods-burstable-pod921026489d0bdcf0f8cea74064ff7986.slice. Nov 5 15:50:52.379911 kubelet[2394]: E1105 15:50:52.379869 2394 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:52.382825 systemd[1]: Created slice kubepods-burstable-pod181c9171eac62723bad0356b94c3a32c.slice - libcontainer container kubepods-burstable-pod181c9171eac62723bad0356b94c3a32c.slice. Nov 5 15:50:52.391143 kubelet[2394]: E1105 15:50:52.391097 2394 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:52.396379 systemd[1]: Created slice kubepods-burstable-pod569f0a4cd0295db1192b8aed19d7228b.slice - libcontainer container kubepods-burstable-pod569f0a4cd0295db1192b8aed19d7228b.slice. 
Nov 5 15:50:52.398502 kubelet[2394]: E1105 15:50:52.398367 2394 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:52.401748 kubelet[2394]: I1105 15:50:52.401711 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:52.402346 kubelet[2394]: E1105 15:50:52.402316 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.239.237:6443/api/v1/nodes\": dial tcp 143.110.239.237:6443: connect: connection refused" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:52.422830 kubelet[2394]: I1105 15:50:52.421456 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/921026489d0bdcf0f8cea74064ff7986-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-6-a291033793\" (UID: \"921026489d0bdcf0f8cea74064ff7986\") " pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793" Nov 5 15:50:52.422830 kubelet[2394]: I1105 15:50:52.421593 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/921026489d0bdcf0f8cea74064ff7986-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-6-a291033793\" (UID: \"921026489d0bdcf0f8cea74064ff7986\") " pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793" Nov 5 15:50:52.422830 kubelet[2394]: I1105 15:50:52.421642 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" (UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793" Nov 5 15:50:52.422830 kubelet[2394]: 
I1105 15:50:52.421678 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" (UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793" Nov 5 15:50:52.422830 kubelet[2394]: I1105 15:50:52.421703 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" (UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793" Nov 5 15:50:52.423331 kubelet[2394]: I1105 15:50:52.421737 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/181c9171eac62723bad0356b94c3a32c-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-6-a291033793\" (UID: \"181c9171eac62723bad0356b94c3a32c\") " pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793" Nov 5 15:50:52.423331 kubelet[2394]: I1105 15:50:52.421769 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/921026489d0bdcf0f8cea74064ff7986-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-6-a291033793\" (UID: \"921026489d0bdcf0f8cea74064ff7986\") " pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793" Nov 5 15:50:52.423331 kubelet[2394]: I1105 15:50:52.421801 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" 
(UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793" Nov 5 15:50:52.423331 kubelet[2394]: I1105 15:50:52.421830 2394 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" (UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793" Nov 5 15:50:52.430949 kubelet[2394]: E1105 15:50:52.430728 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.239.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-6-a291033793?timeout=10s\": dial tcp 143.110.239.237:6443: connect: connection refused" interval="400ms" Nov 5 15:50:52.604606 kubelet[2394]: I1105 15:50:52.604560 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:52.605052 kubelet[2394]: E1105 15:50:52.605011 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.239.237:6443/api/v1/nodes\": dial tcp 143.110.239.237:6443: connect: connection refused" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:52.681831 kubelet[2394]: E1105 15:50:52.681386 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:52.683961 containerd[1598]: time="2025-11-05T15:50:52.683824870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-6-a291033793,Uid:921026489d0bdcf0f8cea74064ff7986,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:52.692511 kubelet[2394]: E1105 15:50:52.692412 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:52.700399 kubelet[2394]: E1105 15:50:52.700329 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:52.700677 containerd[1598]: time="2025-11-05T15:50:52.700617700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-6-a291033793,Uid:181c9171eac62723bad0356b94c3a32c,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:52.701330 containerd[1598]: time="2025-11-05T15:50:52.701282483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-6-a291033793,Uid:569f0a4cd0295db1192b8aed19d7228b,Namespace:kube-system,Attempt:0,}" Nov 5 15:50:52.832133 kubelet[2394]: E1105 15:50:52.832068 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.239.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-6-a291033793?timeout=10s\": dial tcp 143.110.239.237:6443: connect: connection refused" interval="800ms" Nov 5 15:50:52.835320 containerd[1598]: time="2025-11-05T15:50:52.835245241Z" level=info msg="connecting to shim a6d2d1b81520991132e270e75092a7933bc6ec5715477a97c239847f9ab8d61f" address="unix:///run/containerd/s/fe7ac7bad7644e587e189c05936cd3bcf1ebccb6478e2d51ae78921a1b610018" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:52.837505 containerd[1598]: time="2025-11-05T15:50:52.836978523Z" level=info msg="connecting to shim 1fe3537a28139049a7559b2a8009fa90569556d654aec7f93b99c747203d1072" address="unix:///run/containerd/s/8e315b17204a8caa4c9835cb128fc0953e0fd762a2ef6d7fc7e4057e8ab0276d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:52.853149 containerd[1598]: time="2025-11-05T15:50:52.852860904Z" level=info msg="connecting to shim 87f4688b6ccd1f939adc6eb890a9049a27f54e17399747f68372e287af730712" 
address="unix:///run/containerd/s/e5a6da370bd5b6646d55d179f58f580317cabf1b710f488d90ca5e8e5b2e46f3" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:50:52.969850 systemd[1]: Started cri-containerd-1fe3537a28139049a7559b2a8009fa90569556d654aec7f93b99c747203d1072.scope - libcontainer container 1fe3537a28139049a7559b2a8009fa90569556d654aec7f93b99c747203d1072. Nov 5 15:50:52.973210 systemd[1]: Started cri-containerd-87f4688b6ccd1f939adc6eb890a9049a27f54e17399747f68372e287af730712.scope - libcontainer container 87f4688b6ccd1f939adc6eb890a9049a27f54e17399747f68372e287af730712. Nov 5 15:50:52.975228 systemd[1]: Started cri-containerd-a6d2d1b81520991132e270e75092a7933bc6ec5715477a97c239847f9ab8d61f.scope - libcontainer container a6d2d1b81520991132e270e75092a7933bc6ec5715477a97c239847f9ab8d61f. Nov 5 15:50:53.009389 kubelet[2394]: I1105 15:50:53.009345 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:53.010672 kubelet[2394]: E1105 15:50:53.010613 2394 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://143.110.239.237:6443/api/v1/nodes\": dial tcp 143.110.239.237:6443: connect: connection refused" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:53.084503 kubelet[2394]: E1105 15:50:53.083605 2394 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://143.110.239.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487.0.1-6-a291033793&limit=500&resourceVersion=0\": dial tcp 143.110.239.237:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:50:53.092407 containerd[1598]: time="2025-11-05T15:50:53.092296185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487.0.1-6-a291033793,Uid:181c9171eac62723bad0356b94c3a32c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6d2d1b81520991132e270e75092a7933bc6ec5715477a97c239847f9ab8d61f\"" Nov 5 
15:50:53.094908 kubelet[2394]: E1105 15:50:53.094833 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:53.104961 containerd[1598]: time="2025-11-05T15:50:53.104739674Z" level=info msg="CreateContainer within sandbox \"a6d2d1b81520991132e270e75092a7933bc6ec5715477a97c239847f9ab8d61f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:50:53.108346 containerd[1598]: time="2025-11-05T15:50:53.106043293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487.0.1-6-a291033793,Uid:569f0a4cd0295db1192b8aed19d7228b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fe3537a28139049a7559b2a8009fa90569556d654aec7f93b99c747203d1072\"" Nov 5 15:50:53.109740 kubelet[2394]: E1105 15:50:53.109675 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:53.116774 containerd[1598]: time="2025-11-05T15:50:53.116675070Z" level=info msg="CreateContainer within sandbox \"1fe3537a28139049a7559b2a8009fa90569556d654aec7f93b99c747203d1072\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:50:53.128951 containerd[1598]: time="2025-11-05T15:50:53.128879636Z" level=info msg="Container c3e04beff17d1f8e38c88de9a620d574f1bbd089b7b4b2434b149c740349b0f4: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:53.131500 containerd[1598]: time="2025-11-05T15:50:53.131433200Z" level=info msg="Container 67ab19aa0b6521023edbca5ee9b7dcb0ecbfa777cb913c24caf3ac953fbaec29: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:53.133452 containerd[1598]: time="2025-11-05T15:50:53.133318180Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4487.0.1-6-a291033793,Uid:921026489d0bdcf0f8cea74064ff7986,Namespace:kube-system,Attempt:0,} returns sandbox id \"87f4688b6ccd1f939adc6eb890a9049a27f54e17399747f68372e287af730712\"" Nov 5 15:50:53.134285 kubelet[2394]: E1105 15:50:53.134250 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:53.142501 containerd[1598]: time="2025-11-05T15:50:53.141913474Z" level=info msg="CreateContainer within sandbox \"87f4688b6ccd1f939adc6eb890a9049a27f54e17399747f68372e287af730712\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:50:53.149141 containerd[1598]: time="2025-11-05T15:50:53.149059742Z" level=info msg="CreateContainer within sandbox \"a6d2d1b81520991132e270e75092a7933bc6ec5715477a97c239847f9ab8d61f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c3e04beff17d1f8e38c88de9a620d574f1bbd089b7b4b2434b149c740349b0f4\"" Nov 5 15:50:53.151432 containerd[1598]: time="2025-11-05T15:50:53.151389184Z" level=info msg="StartContainer for \"c3e04beff17d1f8e38c88de9a620d574f1bbd089b7b4b2434b149c740349b0f4\"" Nov 5 15:50:53.153572 containerd[1598]: time="2025-11-05T15:50:53.153530290Z" level=info msg="connecting to shim c3e04beff17d1f8e38c88de9a620d574f1bbd089b7b4b2434b149c740349b0f4" address="unix:///run/containerd/s/fe7ac7bad7644e587e189c05936cd3bcf1ebccb6478e2d51ae78921a1b610018" protocol=ttrpc version=3 Nov 5 15:50:53.159020 containerd[1598]: time="2025-11-05T15:50:53.158971108Z" level=info msg="CreateContainer within sandbox \"1fe3537a28139049a7559b2a8009fa90569556d654aec7f93b99c747203d1072\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"67ab19aa0b6521023edbca5ee9b7dcb0ecbfa777cb913c24caf3ac953fbaec29\"" Nov 5 15:50:53.160223 containerd[1598]: time="2025-11-05T15:50:53.159812492Z" level=info 
msg="StartContainer for \"67ab19aa0b6521023edbca5ee9b7dcb0ecbfa777cb913c24caf3ac953fbaec29\"" Nov 5 15:50:53.160358 containerd[1598]: time="2025-11-05T15:50:53.160328832Z" level=info msg="Container a179dd2b6a1a3a7b00d23d73ea0db38ec39600f137ce8738b097dde149dfc7bd: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:50:53.162291 containerd[1598]: time="2025-11-05T15:50:53.162251434Z" level=info msg="connecting to shim 67ab19aa0b6521023edbca5ee9b7dcb0ecbfa777cb913c24caf3ac953fbaec29" address="unix:///run/containerd/s/8e315b17204a8caa4c9835cb128fc0953e0fd762a2ef6d7fc7e4057e8ab0276d" protocol=ttrpc version=3 Nov 5 15:50:53.170268 containerd[1598]: time="2025-11-05T15:50:53.170212832Z" level=info msg="CreateContainer within sandbox \"87f4688b6ccd1f939adc6eb890a9049a27f54e17399747f68372e287af730712\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a179dd2b6a1a3a7b00d23d73ea0db38ec39600f137ce8738b097dde149dfc7bd\"" Nov 5 15:50:53.171223 containerd[1598]: time="2025-11-05T15:50:53.171167006Z" level=info msg="StartContainer for \"a179dd2b6a1a3a7b00d23d73ea0db38ec39600f137ce8738b097dde149dfc7bd\"" Nov 5 15:50:53.174233 containerd[1598]: time="2025-11-05T15:50:53.174134444Z" level=info msg="connecting to shim a179dd2b6a1a3a7b00d23d73ea0db38ec39600f137ce8738b097dde149dfc7bd" address="unix:///run/containerd/s/e5a6da370bd5b6646d55d179f58f580317cabf1b710f488d90ca5e8e5b2e46f3" protocol=ttrpc version=3 Nov 5 15:50:53.188910 systemd[1]: Started cri-containerd-c3e04beff17d1f8e38c88de9a620d574f1bbd089b7b4b2434b149c740349b0f4.scope - libcontainer container c3e04beff17d1f8e38c88de9a620d574f1bbd089b7b4b2434b149c740349b0f4. Nov 5 15:50:53.210864 systemd[1]: Started cri-containerd-67ab19aa0b6521023edbca5ee9b7dcb0ecbfa777cb913c24caf3ac953fbaec29.scope - libcontainer container 67ab19aa0b6521023edbca5ee9b7dcb0ecbfa777cb913c24caf3ac953fbaec29. 
Nov 5 15:50:53.220368 kubelet[2394]: E1105 15:50:53.220046 2394 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://143.110.239.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 143.110.239.237:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:50:53.230779 systemd[1]: Started cri-containerd-a179dd2b6a1a3a7b00d23d73ea0db38ec39600f137ce8738b097dde149dfc7bd.scope - libcontainer container a179dd2b6a1a3a7b00d23d73ea0db38ec39600f137ce8738b097dde149dfc7bd. Nov 5 15:50:53.238717 kubelet[2394]: E1105 15:50:53.238668 2394 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://143.110.239.237:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 143.110.239.237:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:50:53.316649 containerd[1598]: time="2025-11-05T15:50:53.316597832Z" level=info msg="StartContainer for \"c3e04beff17d1f8e38c88de9a620d574f1bbd089b7b4b2434b149c740349b0f4\" returns successfully" Nov 5 15:50:53.354873 containerd[1598]: time="2025-11-05T15:50:53.354828173Z" level=info msg="StartContainer for \"67ab19aa0b6521023edbca5ee9b7dcb0ecbfa777cb913c24caf3ac953fbaec29\" returns successfully" Nov 5 15:50:53.377988 containerd[1598]: time="2025-11-05T15:50:53.377928796Z" level=info msg="StartContainer for \"a179dd2b6a1a3a7b00d23d73ea0db38ec39600f137ce8738b097dde149dfc7bd\" returns successfully" Nov 5 15:50:53.476065 kubelet[2394]: E1105 15:50:53.475614 2394 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://143.110.239.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 143.110.239.237:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:50:53.633316 kubelet[2394]: E1105 15:50:53.633241 2394 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://143.110.239.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487.0.1-6-a291033793?timeout=10s\": dial tcp 143.110.239.237:6443: connect: connection refused" interval="1.6s" Nov 5 15:50:53.812731 kubelet[2394]: I1105 15:50:53.812091 2394 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:54.292046 kubelet[2394]: E1105 15:50:54.291778 2394 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:54.292709 kubelet[2394]: E1105 15:50:54.292409 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:54.298169 kubelet[2394]: E1105 15:50:54.298133 2394 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:54.298300 kubelet[2394]: E1105 15:50:54.298285 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:54.299232 kubelet[2394]: E1105 15:50:54.299157 2394 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:54.299431 kubelet[2394]: E1105 15:50:54.299410 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:55.300177 kubelet[2394]: E1105 15:50:55.300136 2394 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:55.301077 kubelet[2394]: E1105 15:50:55.300762 2394 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:55.301077 kubelet[2394]: E1105 15:50:55.300820 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:55.301077 kubelet[2394]: E1105 15:50:55.300547 2394 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:55.301077 kubelet[2394]: E1105 15:50:55.300905 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:55.301077 kubelet[2394]: E1105 15:50:55.300978 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:55.993142 kubelet[2394]: E1105 15:50:55.992003 2394 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487.0.1-6-a291033793\" not found" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:56.066007 kubelet[2394]: I1105 15:50:56.065748 2394 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-6-a291033793" Nov 5 15:50:56.066007 kubelet[2394]: E1105 15:50:56.065813 2394 
kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4487.0.1-6-a291033793\": node \"ci-4487.0.1-6-a291033793\" not found" Nov 5 15:50:56.080769 kubelet[2394]: E1105 15:50:56.080718 2394 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487.0.1-6-a291033793\" not found" Nov 5 15:50:56.191709 kubelet[2394]: I1105 15:50:56.191316 2394 apiserver.go:52] "Watching apiserver" Nov 5 15:50:56.220617 kubelet[2394]: I1105 15:50:56.220543 2394 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.220933 kubelet[2394]: I1105 15:50:56.220866 2394 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:50:56.229225 kubelet[2394]: E1105 15:50:56.229174 2394 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-6-a291033793\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.229225 kubelet[2394]: I1105 15:50:56.229221 2394 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.232970 kubelet[2394]: E1105 15:50:56.232920 2394 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487.0.1-6-a291033793\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.232970 kubelet[2394]: I1105 15:50:56.232960 2394 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.235493 kubelet[2394]: E1105 15:50:56.235449 2394 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-6-a291033793\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.307294 kubelet[2394]: I1105 15:50:56.307126 2394 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.308512 kubelet[2394]: I1105 15:50:56.307446 2394 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.311359 kubelet[2394]: E1105 15:50:56.311246 2394 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-6-a291033793\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.311761 kubelet[2394]: E1105 15:50:56.311691 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:56.313223 kubelet[2394]: E1105 15:50:56.313183 2394 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487.0.1-6-a291033793\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793" Nov 5 15:50:56.313461 kubelet[2394]: E1105 15:50:56.313427 2394 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:50:58.235386 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-7.scope)... Nov 5 15:50:58.235418 systemd[1]: Reloading... Nov 5 15:50:58.354542 zram_generator::config[2714]: No configuration found. Nov 5 15:50:58.650860 systemd[1]: Reloading finished in 414 ms. Nov 5 15:50:58.690611 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:58.709931 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 5 15:50:58.710238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:58.710332 systemd[1]: kubelet.service: Consumed 1.442s CPU time, 128.4M memory peak. Nov 5 15:50:58.713775 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:58.886946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:58.899420 (kubelet)[2769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:50:58.970583 kubelet[2769]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:50:58.970583 kubelet[2769]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:50:58.970583 kubelet[2769]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 5 15:50:58.970583 kubelet[2769]: I1105 15:50:58.969899 2769 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:50:58.980518 kubelet[2769]: I1105 15:50:58.980439 2769 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:50:58.980518 kubelet[2769]: I1105 15:50:58.980495 2769 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:50:58.980913 kubelet[2769]: I1105 15:50:58.980878 2769 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:50:58.982792 kubelet[2769]: I1105 15:50:58.982755 2769 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 15:50:58.989427 kubelet[2769]: I1105 15:50:58.989152 2769 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:50:58.994071 kubelet[2769]: I1105 15:50:58.994031 2769 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:50:58.998543 kubelet[2769]: I1105 15:50:58.998116 2769 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 15:50:58.998543 kubelet[2769]: I1105 15:50:58.998385 2769 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:50:58.998736 kubelet[2769]: I1105 15:50:58.998411 2769 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487.0.1-6-a291033793","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:50:58.998736 kubelet[2769]: I1105 15:50:58.998666 2769 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 
15:50:58.998736 kubelet[2769]: I1105 15:50:58.998681 2769 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 15:50:58.998964 kubelet[2769]: I1105 15:50:58.998756 2769 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:58.999050 kubelet[2769]: I1105 15:50:58.999015 2769 kubelet.go:480] "Attempting to sync node with API server" Nov 5 15:50:58.999050 kubelet[2769]: I1105 15:50:58.999035 2769 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:50:58.999627 kubelet[2769]: I1105 15:50:58.999066 2769 kubelet.go:386] "Adding apiserver pod source" Nov 5 15:50:58.999627 kubelet[2769]: I1105 15:50:58.999087 2769 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:50:59.002741 kubelet[2769]: I1105 15:50:59.002707 2769 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:50:59.006498 kubelet[2769]: I1105 15:50:59.003775 2769 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:50:59.010559 kubelet[2769]: I1105 15:50:59.010453 2769 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:50:59.012496 kubelet[2769]: I1105 15:50:59.011706 2769 server.go:1289] "Started kubelet" Nov 5 15:50:59.015942 kubelet[2769]: I1105 15:50:59.015853 2769 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:50:59.019514 kubelet[2769]: I1105 15:50:59.018300 2769 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:50:59.019514 kubelet[2769]: I1105 15:50:59.019409 2769 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:50:59.034500 kubelet[2769]: I1105 15:50:59.034341 2769 server.go:317] "Adding debug handlers to kubelet server" Nov 5 15:50:59.037970 
kubelet[2769]: I1105 15:50:59.037265 2769 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:50:59.054968 kubelet[2769]: I1105 15:50:59.054934 2769 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:50:59.057056 kubelet[2769]: I1105 15:50:59.057031 2769 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:50:59.059438 kubelet[2769]: I1105 15:50:59.058982 2769 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:50:59.059803 kubelet[2769]: I1105 15:50:59.059786 2769 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:50:59.062442 kubelet[2769]: I1105 15:50:59.062159 2769 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:50:59.062886 kubelet[2769]: I1105 15:50:59.062846 2769 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:50:59.063416 kubelet[2769]: E1105 15:50:59.063380 2769 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:50:59.067289 kubelet[2769]: I1105 15:50:59.067062 2769 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:50:59.071528 kubelet[2769]: I1105 15:50:59.071354 2769 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 15:50:59.074311 kubelet[2769]: I1105 15:50:59.073159 2769 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 5 15:50:59.074311 kubelet[2769]: I1105 15:50:59.073194 2769 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 15:50:59.074311 kubelet[2769]: I1105 15:50:59.073225 2769 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:50:59.074311 kubelet[2769]: I1105 15:50:59.073235 2769 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 15:50:59.074311 kubelet[2769]: E1105 15:50:59.073330 2769 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:50:59.148601 kubelet[2769]: I1105 15:50:59.148569 2769 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:50:59.148601 kubelet[2769]: I1105 15:50:59.148590 2769 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:50:59.148601 kubelet[2769]: I1105 15:50:59.148613 2769 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:50:59.148838 kubelet[2769]: I1105 15:50:59.148748 2769 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:50:59.148838 kubelet[2769]: I1105 15:50:59.148758 2769 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:50:59.148838 kubelet[2769]: I1105 15:50:59.148773 2769 policy_none.go:49] "None policy: Start" Nov 5 15:50:59.148838 kubelet[2769]: I1105 15:50:59.148785 2769 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:50:59.148838 kubelet[2769]: I1105 15:50:59.148795 2769 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:50:59.148964 kubelet[2769]: I1105 15:50:59.148878 2769 state_mem.go:75] "Updated machine memory state" Nov 5 15:50:59.153772 kubelet[2769]: E1105 15:50:59.153727 2769 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:50:59.153940 kubelet[2769]: I1105 15:50:59.153913 
2769 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:50:59.154004 kubelet[2769]: I1105 15:50:59.153930 2769 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:50:59.154618 kubelet[2769]: I1105 15:50:59.154521 2769 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:50:59.158352 kubelet[2769]: E1105 15:50:59.158321 2769 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:50:59.176627 kubelet[2769]: I1105 15:50:59.176546 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793" Nov 5 15:50:59.181522 kubelet[2769]: I1105 15:50:59.177341 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793" Nov 5 15:50:59.181522 kubelet[2769]: I1105 15:50:59.180872 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793" Nov 5 15:50:59.196063 kubelet[2769]: I1105 15:50:59.194794 2769 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:50:59.202962 kubelet[2769]: I1105 15:50:59.202924 2769 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:50:59.204072 kubelet[2769]: I1105 15:50:59.204035 2769 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 5 15:50:59.260205 kubelet[2769]: I1105 15:50:59.260057 2769 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487.0.1-6-a291033793" Nov 5 
15:50:59.267108 kubelet[2769]: I1105 15:50:59.266749 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-flexvolume-dir\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" (UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.267108 kubelet[2769]: I1105 15:50:59.266804 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-kubeconfig\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" (UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.267108 kubelet[2769]: I1105 15:50:59.266838 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/181c9171eac62723bad0356b94c3a32c-kubeconfig\") pod \"kube-scheduler-ci-4487.0.1-6-a291033793\" (UID: \"181c9171eac62723bad0356b94c3a32c\") " pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.267108 kubelet[2769]: I1105 15:50:59.266865 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/921026489d0bdcf0f8cea74064ff7986-ca-certs\") pod \"kube-apiserver-ci-4487.0.1-6-a291033793\" (UID: \"921026489d0bdcf0f8cea74064ff7986\") " pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.267108 kubelet[2769]: I1105 15:50:59.266889 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/921026489d0bdcf0f8cea74064ff7986-k8s-certs\") pod \"kube-apiserver-ci-4487.0.1-6-a291033793\" (UID: \"921026489d0bdcf0f8cea74064ff7986\") " pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.267423 kubelet[2769]: I1105 15:50:59.266913 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-k8s-certs\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" (UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.267423 kubelet[2769]: I1105 15:50:59.266942 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" (UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.267423 kubelet[2769]: I1105 15:50:59.266974 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/921026489d0bdcf0f8cea74064ff7986-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487.0.1-6-a291033793\" (UID: \"921026489d0bdcf0f8cea74064ff7986\") " pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.267423 kubelet[2769]: I1105 15:50:59.266998 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/569f0a4cd0295db1192b8aed19d7228b-ca-certs\") pod \"kube-controller-manager-ci-4487.0.1-6-a291033793\" (UID: \"569f0a4cd0295db1192b8aed19d7228b\") " pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.279089 kubelet[2769]: I1105 15:50:59.279019 2769 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.279971 kubelet[2769]: I1105 15:50:59.279952 2769 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487.0.1-6-a291033793"
Nov 5 15:50:59.495853 kubelet[2769]: E1105 15:50:59.495404 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:50:59.504600 kubelet[2769]: E1105 15:50:59.504542 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:50:59.507394 kubelet[2769]: E1105 15:50:59.507270 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:00.010131 kubelet[2769]: I1105 15:51:00.010086 2769 apiserver.go:52] "Watching apiserver"
Nov 5 15:51:00.060121 kubelet[2769]: I1105 15:51:00.060062 2769 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 15:51:00.120126 kubelet[2769]: I1105 15:51:00.118398 2769 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793"
Nov 5 15:51:00.120126 kubelet[2769]: E1105 15:51:00.118398 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:00.122314 kubelet[2769]: E1105 15:51:00.121788 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:00.137307 kubelet[2769]: I1105 15:51:00.137223 2769 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 5 15:51:00.137307 kubelet[2769]: E1105 15:51:00.137318 2769 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487.0.1-6-a291033793\" already exists" pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793"
Nov 5 15:51:00.140370 kubelet[2769]: E1105 15:51:00.137576 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:00.196768 kubelet[2769]: I1105 15:51:00.196628 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487.0.1-6-a291033793" podStartSLOduration=1.196594617 podStartE2EDuration="1.196594617s" podCreationTimestamp="2025-11-05 15:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:51:00.178051473 +0000 UTC m=+1.270521881" watchObservedRunningTime="2025-11-05 15:51:00.196594617 +0000 UTC m=+1.289065031"
Nov 5 15:51:00.197736 kubelet[2769]: I1105 15:51:00.197631 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487.0.1-6-a291033793" podStartSLOduration=1.197611136 podStartE2EDuration="1.197611136s" podCreationTimestamp="2025-11-05 15:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:51:00.196429392 +0000 UTC m=+1.288899805" watchObservedRunningTime="2025-11-05 15:51:00.197611136 +0000 UTC m=+1.290081548"
Nov 5 15:51:00.215576 kubelet[2769]: I1105 15:51:00.213778 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487.0.1-6-a291033793" podStartSLOduration=1.213754892 podStartE2EDuration="1.213754892s" podCreationTimestamp="2025-11-05 15:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:51:00.21330582 +0000 UTC m=+1.305776228" watchObservedRunningTime="2025-11-05 15:51:00.213754892 +0000 UTC m=+1.306225310"
Nov 5 15:51:01.121736 kubelet[2769]: E1105 15:51:01.121467 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:01.121736 kubelet[2769]: E1105 15:51:01.122186 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:01.697082 systemd-resolved[1287]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Nov 5 15:51:02.556906 systemd-resolved[1287]: Clock change detected. Flushing caches.
Nov 5 15:51:02.557313 systemd-timesyncd[1463]: Contacted time server 74.208.25.46:123 (2.flatcar.pool.ntp.org).
Nov 5 15:51:02.557398 systemd-timesyncd[1463]: Initial clock synchronization to Wed 2025-11-05 15:51:02.556621 UTC.
Nov 5 15:51:03.019112 kubelet[2769]: E1105 15:51:03.019060 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:03.935492 kubelet[2769]: E1105 15:51:03.935385 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:05.010368 kubelet[2769]: I1105 15:51:05.010212 2769 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 5 15:51:05.010911 kubelet[2769]: I1105 15:51:05.010813 2769 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 5 15:51:05.010947 containerd[1598]: time="2025-11-05T15:51:05.010582976Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 5 15:51:05.908131 systemd[1]: Created slice kubepods-besteffort-pod2aa8fe4c_108e_4afa_819a_ee4a82f34ba7.slice - libcontainer container kubepods-besteffort-pod2aa8fe4c_108e_4afa_819a_ee4a82f34ba7.slice.
Nov 5 15:51:05.919866 kubelet[2769]: I1105 15:51:05.919807 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2aa8fe4c-108e-4afa-819a-ee4a82f34ba7-kube-proxy\") pod \"kube-proxy-gvrpx\" (UID: \"2aa8fe4c-108e-4afa-819a-ee4a82f34ba7\") " pod="kube-system/kube-proxy-gvrpx"
Nov 5 15:51:05.920247 kubelet[2769]: I1105 15:51:05.920174 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2aa8fe4c-108e-4afa-819a-ee4a82f34ba7-xtables-lock\") pod \"kube-proxy-gvrpx\" (UID: \"2aa8fe4c-108e-4afa-819a-ee4a82f34ba7\") " pod="kube-system/kube-proxy-gvrpx"
Nov 5 15:51:05.920700 kubelet[2769]: I1105 15:51:05.920507 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2aa8fe4c-108e-4afa-819a-ee4a82f34ba7-lib-modules\") pod \"kube-proxy-gvrpx\" (UID: \"2aa8fe4c-108e-4afa-819a-ee4a82f34ba7\") " pod="kube-system/kube-proxy-gvrpx"
Nov 5 15:51:05.920700 kubelet[2769]: I1105 15:51:05.920547 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rcwq\" (UniqueName: \"kubernetes.io/projected/2aa8fe4c-108e-4afa-819a-ee4a82f34ba7-kube-api-access-6rcwq\") pod \"kube-proxy-gvrpx\" (UID: \"2aa8fe4c-108e-4afa-819a-ee4a82f34ba7\") " pod="kube-system/kube-proxy-gvrpx"
Nov 5 15:51:06.217060 kubelet[2769]: E1105 15:51:06.216631 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:06.219673 containerd[1598]: time="2025-11-05T15:51:06.219525668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvrpx,Uid:2aa8fe4c-108e-4afa-819a-ee4a82f34ba7,Namespace:kube-system,Attempt:0,}"
Nov 5 15:51:06.262512 containerd[1598]: time="2025-11-05T15:51:06.260867576Z" level=info msg="connecting to shim 50bf33e3fae6c2f64b1e8beb1c5003b0e8620409cc622cffa2ee168cb76f4a96" address="unix:///run/containerd/s/6269f91f6f743d1674b9d30e766ba45ffa8f686aa7827d93fbb9b6d5ff938409" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:51:06.305023 systemd[1]: Started cri-containerd-50bf33e3fae6c2f64b1e8beb1c5003b0e8620409cc622cffa2ee168cb76f4a96.scope - libcontainer container 50bf33e3fae6c2f64b1e8beb1c5003b0e8620409cc622cffa2ee168cb76f4a96.
Nov 5 15:51:06.336209 systemd[1]: Created slice kubepods-besteffort-poddfbb43d0_cfa8_4d3a_a04e_456be1faf9ee.slice - libcontainer container kubepods-besteffort-poddfbb43d0_cfa8_4d3a_a04e_456be1faf9ee.slice.
Nov 5 15:51:06.377281 containerd[1598]: time="2025-11-05T15:51:06.377187109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gvrpx,Uid:2aa8fe4c-108e-4afa-819a-ee4a82f34ba7,Namespace:kube-system,Attempt:0,} returns sandbox id \"50bf33e3fae6c2f64b1e8beb1c5003b0e8620409cc622cffa2ee168cb76f4a96\""
Nov 5 15:51:06.379011 kubelet[2769]: E1105 15:51:06.378818 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:06.387422 containerd[1598]: time="2025-11-05T15:51:06.387346786Z" level=info msg="CreateContainer within sandbox \"50bf33e3fae6c2f64b1e8beb1c5003b0e8620409cc622cffa2ee168cb76f4a96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 5 15:51:06.404669 containerd[1598]: time="2025-11-05T15:51:06.402799664Z" level=info msg="Container 0350a1b27f490db5b009df158601ce79d866d1f1504e5ac1548896c7b014c008: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:51:06.417335 containerd[1598]: time="2025-11-05T15:51:06.417254614Z" level=info msg="CreateContainer within sandbox \"50bf33e3fae6c2f64b1e8beb1c5003b0e8620409cc622cffa2ee168cb76f4a96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0350a1b27f490db5b009df158601ce79d866d1f1504e5ac1548896c7b014c008\""
Nov 5 15:51:06.418500 containerd[1598]: time="2025-11-05T15:51:06.418449737Z" level=info msg="StartContainer for \"0350a1b27f490db5b009df158601ce79d866d1f1504e5ac1548896c7b014c008\""
Nov 5 15:51:06.424065 containerd[1598]: time="2025-11-05T15:51:06.423540556Z" level=info msg="connecting to shim 0350a1b27f490db5b009df158601ce79d866d1f1504e5ac1548896c7b014c008" address="unix:///run/containerd/s/6269f91f6f743d1674b9d30e766ba45ffa8f686aa7827d93fbb9b6d5ff938409" protocol=ttrpc version=3
Nov 5 15:51:06.424254 kubelet[2769]: I1105 15:51:06.423934 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2z46\" (UniqueName: \"kubernetes.io/projected/dfbb43d0-cfa8-4d3a-a04e-456be1faf9ee-kube-api-access-b2z46\") pod \"tigera-operator-7dcd859c48-gphvb\" (UID: \"dfbb43d0-cfa8-4d3a-a04e-456be1faf9ee\") " pod="tigera-operator/tigera-operator-7dcd859c48-gphvb"
Nov 5 15:51:06.424254 kubelet[2769]: I1105 15:51:06.423988 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dfbb43d0-cfa8-4d3a-a04e-456be1faf9ee-var-lib-calico\") pod \"tigera-operator-7dcd859c48-gphvb\" (UID: \"dfbb43d0-cfa8-4d3a-a04e-456be1faf9ee\") " pod="tigera-operator/tigera-operator-7dcd859c48-gphvb"
Nov 5 15:51:06.459062 systemd[1]: Started cri-containerd-0350a1b27f490db5b009df158601ce79d866d1f1504e5ac1548896c7b014c008.scope - libcontainer container 0350a1b27f490db5b009df158601ce79d866d1f1504e5ac1548896c7b014c008.
Nov 5 15:51:06.527372 containerd[1598]: time="2025-11-05T15:51:06.527319560Z" level=info msg="StartContainer for \"0350a1b27f490db5b009df158601ce79d866d1f1504e5ac1548896c7b014c008\" returns successfully"
Nov 5 15:51:06.641661 containerd[1598]: time="2025-11-05T15:51:06.641596325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gphvb,Uid:dfbb43d0-cfa8-4d3a-a04e-456be1faf9ee,Namespace:tigera-operator,Attempt:0,}"
Nov 5 15:51:06.674720 containerd[1598]: time="2025-11-05T15:51:06.674498537Z" level=info msg="connecting to shim 9d007d6870e73bc47acc6e1242f8c4e61864083bdef10fc981a8837f6c677e67" address="unix:///run/containerd/s/56cab4bfd2e4bca657503b03d35a441013e26418bee2317f7bc1c511d008da84" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:51:06.712003 systemd[1]: Started cri-containerd-9d007d6870e73bc47acc6e1242f8c4e61864083bdef10fc981a8837f6c677e67.scope - libcontainer container 9d007d6870e73bc47acc6e1242f8c4e61864083bdef10fc981a8837f6c677e67.
Nov 5 15:51:06.811686 containerd[1598]: time="2025-11-05T15:51:06.811536679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-gphvb,Uid:dfbb43d0-cfa8-4d3a-a04e-456be1faf9ee,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9d007d6870e73bc47acc6e1242f8c4e61864083bdef10fc981a8837f6c677e67\""
Nov 5 15:51:06.817954 containerd[1598]: time="2025-11-05T15:51:06.817820391Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 5 15:51:06.950102 kubelet[2769]: E1105 15:51:06.950052 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:06.968430 kubelet[2769]: I1105 15:51:06.968107 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gvrpx" podStartSLOduration=1.968080746 podStartE2EDuration="1.968080746s" podCreationTimestamp="2025-11-05 15:51:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:51:06.967113602 +0000 UTC m=+7.250420188" watchObservedRunningTime="2025-11-05 15:51:06.968080746 +0000 UTC m=+7.251387336"
Nov 5 15:51:07.241395 kubelet[2769]: E1105 15:51:07.241219 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:07.299554 kubelet[2769]: E1105 15:51:07.299467 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:07.963138 kubelet[2769]: E1105 15:51:07.963097 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:07.963653 kubelet[2769]: E1105 15:51:07.963595 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:08.255371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount759764535.mount: Deactivated successfully.
Nov 5 15:51:08.966509 kubelet[2769]: E1105 15:51:08.966467 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:08.969278 kubelet[2769]: E1105 15:51:08.968303 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:09.225134 containerd[1598]: time="2025-11-05T15:51:09.224962306Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:09.226938 containerd[1598]: time="2025-11-05T15:51:09.226676727Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 5 15:51:09.228270 containerd[1598]: time="2025-11-05T15:51:09.228221882Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:09.230760 containerd[1598]: time="2025-11-05T15:51:09.230621623Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:09.232988 containerd[1598]: time="2025-11-05T15:51:09.231869636Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.41369705s"
Nov 5 15:51:09.232988 containerd[1598]: time="2025-11-05T15:51:09.231928627Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 5 15:51:09.243552 containerd[1598]: time="2025-11-05T15:51:09.243221579Z" level=info msg="CreateContainer within sandbox \"9d007d6870e73bc47acc6e1242f8c4e61864083bdef10fc981a8837f6c677e67\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 5 15:51:09.252925 containerd[1598]: time="2025-11-05T15:51:09.252865777Z" level=info msg="Container 49ad6835ed4aaf82b5d06f2a580e098e4d2c97559a5340c178617424397952fe: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:51:09.265284 containerd[1598]: time="2025-11-05T15:51:09.265207524Z" level=info msg="CreateContainer within sandbox \"9d007d6870e73bc47acc6e1242f8c4e61864083bdef10fc981a8837f6c677e67\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49ad6835ed4aaf82b5d06f2a580e098e4d2c97559a5340c178617424397952fe\""
Nov 5 15:51:09.266275 containerd[1598]: time="2025-11-05T15:51:09.266223741Z" level=info msg="StartContainer for \"49ad6835ed4aaf82b5d06f2a580e098e4d2c97559a5340c178617424397952fe\""
Nov 5 15:51:09.268819 containerd[1598]: time="2025-11-05T15:51:09.268768042Z" level=info msg="connecting to shim 49ad6835ed4aaf82b5d06f2a580e098e4d2c97559a5340c178617424397952fe" address="unix:///run/containerd/s/56cab4bfd2e4bca657503b03d35a441013e26418bee2317f7bc1c511d008da84" protocol=ttrpc version=3
Nov 5 15:51:09.309953 systemd[1]: Started cri-containerd-49ad6835ed4aaf82b5d06f2a580e098e4d2c97559a5340c178617424397952fe.scope - libcontainer container 49ad6835ed4aaf82b5d06f2a580e098e4d2c97559a5340c178617424397952fe.
Nov 5 15:51:09.362450 containerd[1598]: time="2025-11-05T15:51:09.362363989Z" level=info msg="StartContainer for \"49ad6835ed4aaf82b5d06f2a580e098e4d2c97559a5340c178617424397952fe\" returns successfully"
Nov 5 15:51:12.715138 update_engine[1575]: I20251105 15:51:12.715027 1575 update_attempter.cc:509] Updating boot flags...
Nov 5 15:51:16.851226 sudo[1830]: pam_unix(sudo:session): session closed for user root
Nov 5 15:51:16.859682 sshd[1829]: Connection closed by 139.178.68.195 port 37074
Nov 5 15:51:16.858999 sshd-session[1826]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:16.866064 systemd[1]: sshd@6-143.110.239.237:22-139.178.68.195:37074.service: Deactivated successfully.
Nov 5 15:51:16.871278 systemd[1]: session-7.scope: Deactivated successfully.
Nov 5 15:51:16.871481 systemd[1]: session-7.scope: Consumed 7.346s CPU time, 162.8M memory peak.
Nov 5 15:51:16.875793 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit.
Nov 5 15:51:16.880464 systemd-logind[1574]: Removed session 7.
Nov 5 15:51:23.185783 kubelet[2769]: I1105 15:51:23.185503 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-gphvb" podStartSLOduration=14.764540045 podStartE2EDuration="17.185478329s" podCreationTimestamp="2025-11-05 15:51:06 +0000 UTC" firstStartedPulling="2025-11-05 15:51:06.816462407 +0000 UTC m=+7.099768977" lastFinishedPulling="2025-11-05 15:51:09.237400681 +0000 UTC m=+9.520707261" observedRunningTime="2025-11-05 15:51:09.999177843 +0000 UTC m=+10.282484448" watchObservedRunningTime="2025-11-05 15:51:23.185478329 +0000 UTC m=+23.468785046"
Nov 5 15:51:23.207806 systemd[1]: Created slice kubepods-besteffort-pod7375cb7a_8de7_46cf_8de3_87f10b8decd4.slice - libcontainer container kubepods-besteffort-pod7375cb7a_8de7_46cf_8de3_87f10b8decd4.slice.
Nov 5 15:51:23.227674 kubelet[2769]: I1105 15:51:23.226961 2769 status_manager.go:895] "Failed to get status for pod" podUID="7375cb7a-8de7-46cf-8de3-87f10b8decd4" pod="calico-system/calico-typha-6f86f86f45-7fltc" err="pods \"calico-typha-6f86f86f45-7fltc\" is forbidden: User \"system:node:ci-4487.0.1-6-a291033793\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4487.0.1-6-a291033793' and this object"
Nov 5 15:51:23.246618 kubelet[2769]: I1105 15:51:23.246495 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7375cb7a-8de7-46cf-8de3-87f10b8decd4-typha-certs\") pod \"calico-typha-6f86f86f45-7fltc\" (UID: \"7375cb7a-8de7-46cf-8de3-87f10b8decd4\") " pod="calico-system/calico-typha-6f86f86f45-7fltc"
Nov 5 15:51:23.246618 kubelet[2769]: I1105 15:51:23.246542 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsrf8\" (UniqueName: \"kubernetes.io/projected/7375cb7a-8de7-46cf-8de3-87f10b8decd4-kube-api-access-vsrf8\") pod \"calico-typha-6f86f86f45-7fltc\" (UID: \"7375cb7a-8de7-46cf-8de3-87f10b8decd4\") " pod="calico-system/calico-typha-6f86f86f45-7fltc"
Nov 5 15:51:23.246618 kubelet[2769]: I1105 15:51:23.246563 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7375cb7a-8de7-46cf-8de3-87f10b8decd4-tigera-ca-bundle\") pod \"calico-typha-6f86f86f45-7fltc\" (UID: \"7375cb7a-8de7-46cf-8de3-87f10b8decd4\") " pod="calico-system/calico-typha-6f86f86f45-7fltc"
Nov 5 15:51:23.421594 systemd[1]: Created slice kubepods-besteffort-pod6b6b15ed_d69c_4c57_a1f3_c01e2a1405cf.slice - libcontainer container kubepods-besteffort-pod6b6b15ed_d69c_4c57_a1f3_c01e2a1405cf.slice.
Nov 5 15:51:23.449536 kubelet[2769]: I1105 15:51:23.448864 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-node-certs\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.449536 kubelet[2769]: I1105 15:51:23.448909 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-var-lib-calico\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.449536 kubelet[2769]: I1105 15:51:23.448930 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-flexvol-driver-host\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.449536 kubelet[2769]: I1105 15:51:23.448948 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-var-run-calico\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.449536 kubelet[2769]: I1105 15:51:23.448966 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-cni-bin-dir\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.449985 kubelet[2769]: I1105 15:51:23.448981 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-cni-log-dir\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.449985 kubelet[2769]: I1105 15:51:23.449001 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-policysync\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.449985 kubelet[2769]: I1105 15:51:23.449017 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-tigera-ca-bundle\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.449985 kubelet[2769]: I1105 15:51:23.449034 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xttq\" (UniqueName: \"kubernetes.io/projected/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-kube-api-access-7xttq\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.449985 kubelet[2769]: I1105 15:51:23.449051 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-lib-modules\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.450184 kubelet[2769]: I1105 15:51:23.449068 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-xtables-lock\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.450184 kubelet[2769]: I1105 15:51:23.449085 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf-cni-net-dir\") pod \"calico-node-q2jkz\" (UID: \"6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf\") " pod="calico-system/calico-node-q2jkz"
Nov 5 15:51:23.515154 kubelet[2769]: E1105 15:51:23.515073 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:23.516992 containerd[1598]: time="2025-11-05T15:51:23.516939643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f86f86f45-7fltc,Uid:7375cb7a-8de7-46cf-8de3-87f10b8decd4,Namespace:calico-system,Attempt:0,}"
Nov 5 15:51:23.531301 kubelet[2769]: E1105 15:51:23.530872 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb"
Nov 5 15:51:23.550760 kubelet[2769]: I1105 15:51:23.549694 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/be0a8e42-97b5-40e7-95d6-3baf83ea6dbb-varrun\") pod \"csi-node-driver-69q97\" (UID: \"be0a8e42-97b5-40e7-95d6-3baf83ea6dbb\") " pod="calico-system/csi-node-driver-69q97"
Nov 5 15:51:23.550760 kubelet[2769]: I1105 15:51:23.549839 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be0a8e42-97b5-40e7-95d6-3baf83ea6dbb-kubelet-dir\") pod \"csi-node-driver-69q97\" (UID: \"be0a8e42-97b5-40e7-95d6-3baf83ea6dbb\") " pod="calico-system/csi-node-driver-69q97"
Nov 5 15:51:23.550760 kubelet[2769]: I1105 15:51:23.549889 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/be0a8e42-97b5-40e7-95d6-3baf83ea6dbb-socket-dir\") pod \"csi-node-driver-69q97\" (UID: \"be0a8e42-97b5-40e7-95d6-3baf83ea6dbb\") " pod="calico-system/csi-node-driver-69q97"
Nov 5 15:51:23.550760 kubelet[2769]: I1105 15:51:23.549909 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn99s\" (UniqueName: \"kubernetes.io/projected/be0a8e42-97b5-40e7-95d6-3baf83ea6dbb-kube-api-access-qn99s\") pod \"csi-node-driver-69q97\" (UID: \"be0a8e42-97b5-40e7-95d6-3baf83ea6dbb\") " pod="calico-system/csi-node-driver-69q97"
Nov 5 15:51:23.550760 kubelet[2769]: I1105 15:51:23.549956 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/be0a8e42-97b5-40e7-95d6-3baf83ea6dbb-registration-dir\") pod \"csi-node-driver-69q97\" (UID: \"be0a8e42-97b5-40e7-95d6-3baf83ea6dbb\") " pod="calico-system/csi-node-driver-69q97"
Nov 5 15:51:23.571028 kubelet[2769]: E1105 15:51:23.570816 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:23.571028 kubelet[2769]: W1105 15:51:23.570949 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:23.571028 kubelet[2769]: E1105 15:51:23.570982 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:23.575627 kubelet[2769]: E1105 15:51:23.573720 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:23.575627 kubelet[2769]: W1105 15:51:23.573753 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:23.575627 kubelet[2769]: E1105 15:51:23.573784 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:23.575627 kubelet[2769]: E1105 15:51:23.574736 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:23.575627 kubelet[2769]: W1105 15:51:23.574755 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:23.575627 kubelet[2769]: E1105 15:51:23.574780 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:23.576268 kubelet[2769]: E1105 15:51:23.576243 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:23.576268 kubelet[2769]: W1105 15:51:23.576266 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:23.576624 kubelet[2769]: E1105 15:51:23.576308 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:23.580690 kubelet[2769]: E1105 15:51:23.578731 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:23.580690 kubelet[2769]: W1105 15:51:23.578756 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:23.580690 kubelet[2769]: E1105 15:51:23.578780 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:23.580690 kubelet[2769]: E1105 15:51:23.579714 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:23.580690 kubelet[2769]: W1105 15:51:23.579732 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:23.580690 kubelet[2769]: E1105 15:51:23.579752 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:23.599567 containerd[1598]: time="2025-11-05T15:51:23.599495921Z" level=info msg="connecting to shim b0d6db380bfec50880eb224ae755fbc883951b2d665c02060b1bbd15dd2233a9" address="unix:///run/containerd/s/bf131044274180bfb9f4b6a7dac987c5f1dc3412934daad614f44734c9bbc424" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:51:23.624340 kubelet[2769]: E1105 15:51:23.622628 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:23.624571 kubelet[2769]: W1105 15:51:23.624358 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:23.624571 kubelet[2769]: E1105 15:51:23.624397 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.652952 kubelet[2769]: E1105 15:51:23.652906 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.652952 kubelet[2769]: W1105 15:51:23.652945 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.653162 kubelet[2769]: E1105 15:51:23.652978 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.654471 kubelet[2769]: E1105 15:51:23.654412 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.654471 kubelet[2769]: W1105 15:51:23.654442 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.654471 kubelet[2769]: E1105 15:51:23.654468 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.656912 kubelet[2769]: E1105 15:51:23.656832 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.656912 kubelet[2769]: W1105 15:51:23.656855 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.656912 kubelet[2769]: E1105 15:51:23.656878 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.658979 kubelet[2769]: E1105 15:51:23.658877 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.658979 kubelet[2769]: W1105 15:51:23.658903 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.658979 kubelet[2769]: E1105 15:51:23.658924 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.659229 kubelet[2769]: E1105 15:51:23.659081 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.659229 kubelet[2769]: W1105 15:51:23.659088 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.659229 kubelet[2769]: E1105 15:51:23.659096 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.659229 kubelet[2769]: E1105 15:51:23.659215 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.659229 kubelet[2769]: W1105 15:51:23.659221 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.659229 kubelet[2769]: E1105 15:51:23.659229 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.659648 kubelet[2769]: E1105 15:51:23.659422 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.659648 kubelet[2769]: W1105 15:51:23.659428 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.659648 kubelet[2769]: E1105 15:51:23.659436 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.659814 kubelet[2769]: E1105 15:51:23.659792 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.659814 kubelet[2769]: W1105 15:51:23.659807 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.659990 kubelet[2769]: E1105 15:51:23.659818 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.660070 kubelet[2769]: E1105 15:51:23.660043 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.660070 kubelet[2769]: W1105 15:51:23.660055 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.660070 kubelet[2769]: E1105 15:51:23.660064 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.660846 kubelet[2769]: E1105 15:51:23.660827 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.660846 kubelet[2769]: W1105 15:51:23.660841 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.660958 kubelet[2769]: E1105 15:51:23.660863 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.662117 kubelet[2769]: E1105 15:51:23.662095 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.662117 kubelet[2769]: W1105 15:51:23.662113 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.662203 kubelet[2769]: E1105 15:51:23.662127 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.662661 kubelet[2769]: E1105 15:51:23.662623 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.663788 kubelet[2769]: W1105 15:51:23.663725 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.663788 kubelet[2769]: E1105 15:51:23.663752 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.664168 kubelet[2769]: E1105 15:51:23.664154 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.664168 kubelet[2769]: W1105 15:51:23.664167 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.664246 kubelet[2769]: E1105 15:51:23.664180 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.664493 kubelet[2769]: E1105 15:51:23.664477 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.664493 kubelet[2769]: W1105 15:51:23.664491 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.664607 kubelet[2769]: E1105 15:51:23.664529 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.666002 kubelet[2769]: E1105 15:51:23.665939 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.666002 kubelet[2769]: W1105 15:51:23.665998 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.666183 kubelet[2769]: E1105 15:51:23.666013 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.666290 kubelet[2769]: E1105 15:51:23.666265 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.666290 kubelet[2769]: W1105 15:51:23.666279 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.666352 kubelet[2769]: E1105 15:51:23.666291 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.666541 kubelet[2769]: E1105 15:51:23.666528 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.666541 kubelet[2769]: W1105 15:51:23.666540 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.666607 kubelet[2769]: E1105 15:51:23.666549 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.666844 kubelet[2769]: E1105 15:51:23.666827 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.666844 kubelet[2769]: W1105 15:51:23.666842 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.666928 kubelet[2769]: E1105 15:51:23.666851 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.667836 kubelet[2769]: E1105 15:51:23.667815 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.667836 kubelet[2769]: W1105 15:51:23.667833 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.667941 kubelet[2769]: E1105 15:51:23.667846 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.668224 kubelet[2769]: E1105 15:51:23.668210 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.668224 kubelet[2769]: W1105 15:51:23.668223 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.668307 kubelet[2769]: E1105 15:51:23.668234 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.668730 kubelet[2769]: E1105 15:51:23.668710 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.668730 kubelet[2769]: W1105 15:51:23.668727 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.668905 kubelet[2769]: E1105 15:51:23.668738 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.669350 kubelet[2769]: E1105 15:51:23.669332 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.669350 kubelet[2769]: W1105 15:51:23.669348 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.669718 kubelet[2769]: E1105 15:51:23.669359 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.669718 kubelet[2769]: E1105 15:51:23.669707 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.669786 kubelet[2769]: W1105 15:51:23.669763 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.669786 kubelet[2769]: E1105 15:51:23.669775 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.670677 kubelet[2769]: E1105 15:51:23.670628 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.670677 kubelet[2769]: W1105 15:51:23.670674 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.670778 kubelet[2769]: E1105 15:51:23.670686 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.671081 kubelet[2769]: E1105 15:51:23.671066 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.671081 kubelet[2769]: W1105 15:51:23.671078 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.671154 kubelet[2769]: E1105 15:51:23.671088 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:23.696503 systemd[1]: Started cri-containerd-b0d6db380bfec50880eb224ae755fbc883951b2d665c02060b1bbd15dd2233a9.scope - libcontainer container b0d6db380bfec50880eb224ae755fbc883951b2d665c02060b1bbd15dd2233a9. Nov 5 15:51:23.715115 kubelet[2769]: E1105 15:51:23.715004 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:23.715318 kubelet[2769]: W1105 15:51:23.715248 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:23.716244 kubelet[2769]: E1105 15:51:23.715407 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:23.726829 kubelet[2769]: E1105 15:51:23.726588 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:23.730910 containerd[1598]: time="2025-11-05T15:51:23.730869220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2jkz,Uid:6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf,Namespace:calico-system,Attempt:0,}" Nov 5 15:51:23.759874 containerd[1598]: time="2025-11-05T15:51:23.759798894Z" level=info msg="connecting to shim 16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9" address="unix:///run/containerd/s/de0bcf4b1f71edc3e2536f32d96a011a022611bcc9449d7da8d2377faba737e3" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:23.806863 systemd[1]: Started cri-containerd-16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9.scope - libcontainer container 16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9. 
Nov 5 15:51:23.849962 containerd[1598]: time="2025-11-05T15:51:23.849915834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f86f86f45-7fltc,Uid:7375cb7a-8de7-46cf-8de3-87f10b8decd4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b0d6db380bfec50880eb224ae755fbc883951b2d665c02060b1bbd15dd2233a9\"" Nov 5 15:51:23.851542 kubelet[2769]: E1105 15:51:23.850768 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:23.857758 containerd[1598]: time="2025-11-05T15:51:23.857659600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:51:23.921406 containerd[1598]: time="2025-11-05T15:51:23.921076619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2jkz,Uid:6b6b15ed-d69c-4c57-a1f3-c01e2a1405cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9\"" Nov 5 15:51:23.923393 kubelet[2769]: E1105 15:51:23.923253 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:24.884942 kubelet[2769]: E1105 15:51:24.884757 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb" Nov 5 15:51:25.626658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403206088.mount: Deactivated successfully. 
Nov 5 15:51:26.809903 containerd[1598]: time="2025-11-05T15:51:26.809063502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:51:26.809903 containerd[1598]: time="2025-11-05T15:51:26.809831667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 15:51:26.810632 containerd[1598]: time="2025-11-05T15:51:26.810587656Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:51:26.861710 containerd[1598]: time="2025-11-05T15:51:26.861611608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:51:26.863396 containerd[1598]: time="2025-11-05T15:51:26.863342582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.005581882s" Nov 5 15:51:26.863577 containerd[1598]: time="2025-11-05T15:51:26.863405176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 15:51:26.865150 containerd[1598]: time="2025-11-05T15:51:26.865091241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:51:26.884593 kubelet[2769]: E1105 15:51:26.883948 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb" Nov 5 15:51:26.892967 containerd[1598]: time="2025-11-05T15:51:26.892899712Z" level=info msg="CreateContainer within sandbox \"b0d6db380bfec50880eb224ae755fbc883951b2d665c02060b1bbd15dd2233a9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:51:26.904932 containerd[1598]: time="2025-11-05T15:51:26.904872613Z" level=info msg="Container 9c60696ea7705c47e080d9095045ed50d10c936e38bd537c6d7b43bac6dceac0: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:51:26.918249 containerd[1598]: time="2025-11-05T15:51:26.918079160Z" level=info msg="CreateContainer within sandbox \"b0d6db380bfec50880eb224ae755fbc883951b2d665c02060b1bbd15dd2233a9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9c60696ea7705c47e080d9095045ed50d10c936e38bd537c6d7b43bac6dceac0\"" Nov 5 15:51:26.919109 containerd[1598]: time="2025-11-05T15:51:26.919003471Z" level=info msg="StartContainer for \"9c60696ea7705c47e080d9095045ed50d10c936e38bd537c6d7b43bac6dceac0\"" Nov 5 15:51:26.922358 containerd[1598]: time="2025-11-05T15:51:26.922300732Z" level=info msg="connecting to shim 9c60696ea7705c47e080d9095045ed50d10c936e38bd537c6d7b43bac6dceac0" address="unix:///run/containerd/s/bf131044274180bfb9f4b6a7dac987c5f1dc3412934daad614f44734c9bbc424" protocol=ttrpc version=3 Nov 5 15:51:26.961959 systemd[1]: Started cri-containerd-9c60696ea7705c47e080d9095045ed50d10c936e38bd537c6d7b43bac6dceac0.scope - libcontainer container 9c60696ea7705c47e080d9095045ed50d10c936e38bd537c6d7b43bac6dceac0. 
Nov 5 15:51:27.058924 containerd[1598]: time="2025-11-05T15:51:27.058876736Z" level=info msg="StartContainer for \"9c60696ea7705c47e080d9095045ed50d10c936e38bd537c6d7b43bac6dceac0\" returns successfully" Nov 5 15:51:28.044218 kubelet[2769]: E1105 15:51:28.044117 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:28.083018 kubelet[2769]: I1105 15:51:28.082786 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6f86f86f45-7fltc" podStartSLOduration=2.073913239 podStartE2EDuration="5.082759467s" podCreationTimestamp="2025-11-05 15:51:23 +0000 UTC" firstStartedPulling="2025-11-05 15:51:23.855712781 +0000 UTC m=+24.139019377" lastFinishedPulling="2025-11-05 15:51:26.864559004 +0000 UTC m=+27.147865605" observedRunningTime="2025-11-05 15:51:28.065570993 +0000 UTC m=+28.348877592" watchObservedRunningTime="2025-11-05 15:51:28.082759467 +0000 UTC m=+28.366066063" Nov 5 15:51:28.137493 kubelet[2769]: E1105 15:51:28.137361 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:28.137493 kubelet[2769]: W1105 15:51:28.137390 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:28.137493 kubelet[2769]: E1105 15:51:28.137424 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:51:28.138120 kubelet[2769]: E1105 15:51:28.138084 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:28.138285 kubelet[2769]: W1105 15:51:28.138100 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:28.138285 kubelet[2769]: E1105 15:51:28.138227 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:51:28.138621 kubelet[2769]: E1105 15:51:28.138590 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:51:28.138806 kubelet[2769]: W1105 15:51:28.138676 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:51:28.138806 kubelet[2769]: E1105 15:51:28.138690 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 5 15:51:28.139200 kubelet[2769]: E1105 15:51:28.139134 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.139200 kubelet[2769]: W1105 15:51:28.139150 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.139200 kubelet[2769]: E1105 15:51:28.139160 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.139588 kubelet[2769]: E1105 15:51:28.139525 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.139588 kubelet[2769]: W1105 15:51:28.139535 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.139588 kubelet[2769]: E1105 15:51:28.139544 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.139989 kubelet[2769]: E1105 15:51:28.139926 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.139989 kubelet[2769]: W1105 15:51:28.139942 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.139989 kubelet[2769]: E1105 15:51:28.139957 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.140520 kubelet[2769]: E1105 15:51:28.140457 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.140520 kubelet[2769]: W1105 15:51:28.140468 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.140520 kubelet[2769]: E1105 15:51:28.140479 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.140871 kubelet[2769]: E1105 15:51:28.140849 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.140921 kubelet[2769]: W1105 15:51:28.140872 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.140921 kubelet[2769]: E1105 15:51:28.140892 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.141104 kubelet[2769]: E1105 15:51:28.141088 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.141104 kubelet[2769]: W1105 15:51:28.141100 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.141165 kubelet[2769]: E1105 15:51:28.141110 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.141286 kubelet[2769]: E1105 15:51:28.141268 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.141317 kubelet[2769]: W1105 15:51:28.141289 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.141317 kubelet[2769]: E1105 15:51:28.141300 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.141490 kubelet[2769]: E1105 15:51:28.141477 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.141490 kubelet[2769]: W1105 15:51:28.141489 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.141546 kubelet[2769]: E1105 15:51:28.141497 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.141855 kubelet[2769]: E1105 15:51:28.141840 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.141896 kubelet[2769]: W1105 15:51:28.141856 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.141896 kubelet[2769]: E1105 15:51:28.141870 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.142425 kubelet[2769]: E1105 15:51:28.142394 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.142425 kubelet[2769]: W1105 15:51:28.142414 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.142425 kubelet[2769]: E1105 15:51:28.142426 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.143070 kubelet[2769]: E1105 15:51:28.143040 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.143070 kubelet[2769]: W1105 15:51:28.143055 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.143070 kubelet[2769]: E1105 15:51:28.143068 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.143771 kubelet[2769]: E1105 15:51:28.143748 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.143771 kubelet[2769]: W1105 15:51:28.143765 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.143932 kubelet[2769]: E1105 15:51:28.143776 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.144570 kubelet[2769]: E1105 15:51:28.144523 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.144570 kubelet[2769]: W1105 15:51:28.144565 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.144570 kubelet[2769]: E1105 15:51:28.144576 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.145012 kubelet[2769]: E1105 15:51:28.144775 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.145012 kubelet[2769]: W1105 15:51:28.144783 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.145012 kubelet[2769]: E1105 15:51:28.144792 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.145244 kubelet[2769]: E1105 15:51:28.145222 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.145334 kubelet[2769]: W1105 15:51:28.145318 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.145511 kubelet[2769]: E1105 15:51:28.145397 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.145660 kubelet[2769]: E1105 15:51:28.145623 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.145723 kubelet[2769]: W1105 15:51:28.145713 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.145773 kubelet[2769]: E1105 15:51:28.145764 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.145979 kubelet[2769]: E1105 15:51:28.145970 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.146029 kubelet[2769]: W1105 15:51:28.146022 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.146075 kubelet[2769]: E1105 15:51:28.146067 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.146302 kubelet[2769]: E1105 15:51:28.146292 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.146363 kubelet[2769]: W1105 15:51:28.146355 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.146415 kubelet[2769]: E1105 15:51:28.146407 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.146817 kubelet[2769]: E1105 15:51:28.146691 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.146817 kubelet[2769]: W1105 15:51:28.146703 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.146817 kubelet[2769]: E1105 15:51:28.146713 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.146984 kubelet[2769]: E1105 15:51:28.146974 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.147033 kubelet[2769]: W1105 15:51:28.147025 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.147081 kubelet[2769]: E1105 15:51:28.147073 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.147511 kubelet[2769]: E1105 15:51:28.147382 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.147511 kubelet[2769]: W1105 15:51:28.147400 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.147511 kubelet[2769]: E1105 15:51:28.147413 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.147696 kubelet[2769]: E1105 15:51:28.147581 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.147696 kubelet[2769]: W1105 15:51:28.147592 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.147696 kubelet[2769]: E1105 15:51:28.147601 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.147814 kubelet[2769]: E1105 15:51:28.147760 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.147814 kubelet[2769]: W1105 15:51:28.147767 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.147814 kubelet[2769]: E1105 15:51:28.147774 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.148038 kubelet[2769]: E1105 15:51:28.147972 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.148038 kubelet[2769]: W1105 15:51:28.147978 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.148038 kubelet[2769]: E1105 15:51:28.147986 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.148494 kubelet[2769]: E1105 15:51:28.148473 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.148494 kubelet[2769]: W1105 15:51:28.148489 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.148592 kubelet[2769]: E1105 15:51:28.148501 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.148818 kubelet[2769]: E1105 15:51:28.148797 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.148818 kubelet[2769]: W1105 15:51:28.148814 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.148928 kubelet[2769]: E1105 15:51:28.148826 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.149028 kubelet[2769]: E1105 15:51:28.149014 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.149028 kubelet[2769]: W1105 15:51:28.149025 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.149212 kubelet[2769]: E1105 15:51:28.149034 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.149212 kubelet[2769]: E1105 15:51:28.149191 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.149212 kubelet[2769]: W1105 15:51:28.149199 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.149212 kubelet[2769]: E1105 15:51:28.149206 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.149386 kubelet[2769]: E1105 15:51:28.149352 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.149386 kubelet[2769]: W1105 15:51:28.149359 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.149386 kubelet[2769]: E1105 15:51:28.149366 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.149802 kubelet[2769]: E1105 15:51:28.149780 2769 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 5 15:51:28.149802 kubelet[2769]: W1105 15:51:28.149797 2769 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 5 15:51:28.149922 kubelet[2769]: E1105 15:51:28.149810 2769 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 5 15:51:28.434663 containerd[1598]: time="2025-11-05T15:51:28.432190910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:28.434663 containerd[1598]: time="2025-11-05T15:51:28.434022143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 5 15:51:28.436163 containerd[1598]: time="2025-11-05T15:51:28.435337589Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:28.439750 containerd[1598]: time="2025-11-05T15:51:28.439686989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:28.441746 containerd[1598]: time="2025-11-05T15:51:28.441680649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.576443567s"
Nov 5 15:51:28.441746 containerd[1598]: time="2025-11-05T15:51:28.441741735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 5 15:51:28.450186 containerd[1598]: time="2025-11-05T15:51:28.450071502Z" level=info msg="CreateContainer within sandbox \"16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 5 15:51:28.466600 containerd[1598]: time="2025-11-05T15:51:28.466512319Z" level=info msg="Container e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:51:28.480053 containerd[1598]: time="2025-11-05T15:51:28.479993130Z" level=info msg="CreateContainer within sandbox \"16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40\""
Nov 5 15:51:28.481242 containerd[1598]: time="2025-11-05T15:51:28.481211478Z" level=info msg="StartContainer for \"e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40\""
Nov 5 15:51:28.484124 containerd[1598]: time="2025-11-05T15:51:28.484067505Z" level=info msg="connecting to shim e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40" address="unix:///run/containerd/s/de0bcf4b1f71edc3e2536f32d96a011a022611bcc9449d7da8d2377faba737e3" protocol=ttrpc version=3
Nov 5 15:51:28.519935 systemd[1]: Started cri-containerd-e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40.scope - libcontainer container e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40.
Nov 5 15:51:28.590388 containerd[1598]: time="2025-11-05T15:51:28.590081403Z" level=info msg="StartContainer for \"e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40\" returns successfully"
Nov 5 15:51:28.610076 systemd[1]: cri-containerd-e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40.scope: Deactivated successfully.
Nov 5 15:51:28.611152 systemd[1]: cri-containerd-e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40.scope: Consumed 43ms CPU time, 6.1M memory peak, 4.6M written to disk.
Nov 5 15:51:28.624705 containerd[1598]: time="2025-11-05T15:51:28.624242682Z" level=info msg="received exit event container_id:\"e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40\" id:\"e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40\" pid:3427 exited_at:{seconds:1762357888 nanos:614447346}"
Nov 5 15:51:28.643821 containerd[1598]: time="2025-11-05T15:51:28.643750270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40\" id:\"e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40\" pid:3427 exited_at:{seconds:1762357888 nanos:614447346}"
Nov 5 15:51:28.672041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e36907ff024b91568e1c76d4b4316f724db35c64d0b670a93dce8ad87505fe40-rootfs.mount: Deactivated successfully.
Nov 5 15:51:28.883158 kubelet[2769]: E1105 15:51:28.883089 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb"
Nov 5 15:51:29.052690 kubelet[2769]: E1105 15:51:29.052294 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:29.052690 kubelet[2769]: E1105 15:51:29.052410 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:29.055421 containerd[1598]: time="2025-11-05T15:51:29.055361265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 5 15:51:30.054503 kubelet[2769]: E1105 15:51:30.054458 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:30.883722 kubelet[2769]: E1105 15:51:30.883621 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb"
Nov 5 15:51:32.883866 kubelet[2769]: E1105 15:51:32.883798 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb"
Nov 5 15:51:33.420571 containerd[1598]: time="2025-11-05T15:51:33.420505636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:33.421794 containerd[1598]: time="2025-11-05T15:51:33.421746593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 5 15:51:33.423561 containerd[1598]: time="2025-11-05T15:51:33.423499200Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:33.424837 containerd[1598]: time="2025-11-05T15:51:33.424798144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:33.426184 containerd[1598]: time="2025-11-05T15:51:33.426133083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.370712225s"
Nov 5 15:51:33.426184 containerd[1598]: time="2025-11-05T15:51:33.426175117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 5 15:51:33.433420 containerd[1598]: time="2025-11-05T15:51:33.433308625Z" level=info msg="CreateContainer within sandbox \"16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 5 15:51:33.461517 containerd[1598]: time="2025-11-05T15:51:33.461339712Z" level=info msg="Container 6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:51:33.468068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749123311.mount: Deactivated successfully.
Nov 5 15:51:33.489657 containerd[1598]: time="2025-11-05T15:51:33.489505515Z" level=info msg="CreateContainer within sandbox \"16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982\""
Nov 5 15:51:33.491889 containerd[1598]: time="2025-11-05T15:51:33.490512502Z" level=info msg="StartContainer for \"6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982\""
Nov 5 15:51:33.494083 containerd[1598]: time="2025-11-05T15:51:33.494007484Z" level=info msg="connecting to shim 6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982" address="unix:///run/containerd/s/de0bcf4b1f71edc3e2536f32d96a011a022611bcc9449d7da8d2377faba737e3" protocol=ttrpc version=3
Nov 5 15:51:33.537751 systemd[1]: Started cri-containerd-6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982.scope - libcontainer container 6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982.
Nov 5 15:51:33.604012 containerd[1598]: time="2025-11-05T15:51:33.603950170Z" level=info msg="StartContainer for \"6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982\" returns successfully"
Nov 5 15:51:34.086154 kubelet[2769]: E1105 15:51:34.085456 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:51:34.408777 systemd[1]: cri-containerd-6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982.scope: Deactivated successfully.
Nov 5 15:51:34.411588 systemd[1]: cri-containerd-6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982.scope: Consumed 699ms CPU time, 161.6M memory peak, 10.2M read from disk, 171.3M written to disk.
Nov 5 15:51:34.531690 containerd[1598]: time="2025-11-05T15:51:34.530924950Z" level=info msg="received exit event container_id:\"6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982\" id:\"6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982\" pid:3486 exited_at:{seconds:1762357894 nanos:509310929}"
Nov 5 15:51:34.556821 containerd[1598]: time="2025-11-05T15:51:34.556749089Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982\" id:\"6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982\" pid:3486 exited_at:{seconds:1762357894 nanos:509310929}"
Nov 5 15:51:34.581172 kubelet[2769]: I1105 15:51:34.581138 2769 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 5 15:51:34.636076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6534ed29ac4cfbb458e2614b8187f57f14f53c31e3730412c12b8ecea07b3982-rootfs.mount: Deactivated successfully.
Nov 5 15:51:34.676832 systemd[1]: Created slice kubepods-burstable-pod899384b2_cd0c_4539_b89b_fa912eceabb8.slice - libcontainer container kubepods-burstable-pod899384b2_cd0c_4539_b89b_fa912eceabb8.slice.
Nov 5 15:51:34.703063 kubelet[2769]: I1105 15:51:34.702603 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d0421556-4619-489b-96b6-556923804205-goldmane-key-pair\") pod \"goldmane-666569f655-84dz5\" (UID: \"d0421556-4619-489b-96b6-556923804205\") " pod="calico-system/goldmane-666569f655-84dz5"
Nov 5 15:51:34.703549 kubelet[2769]: I1105 15:51:34.703391 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt4mx\" (UniqueName: \"kubernetes.io/projected/165fdd14-70f6-41d7-a608-5c88252d2d07-kube-api-access-lt4mx\") pod \"calico-apiserver-69cd9bb6f5-8xkrl\" (UID: \"165fdd14-70f6-41d7-a608-5c88252d2d07\") " pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl"
Nov 5 15:51:34.703549 kubelet[2769]: I1105 15:51:34.703442 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2171a61d-ffd4-4f1c-8106-ddf8826eef75-calico-apiserver-certs\") pod \"calico-apiserver-69cd9bb6f5-kptv5\" (UID: \"2171a61d-ffd4-4f1c-8106-ddf8826eef75\") " pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5"
Nov 5 15:51:34.703549 kubelet[2769]: I1105 15:51:34.703494 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0421556-4619-489b-96b6-556923804205-config\") pod \"goldmane-666569f655-84dz5\" (UID: \"d0421556-4619-489b-96b6-556923804205\") " pod="calico-system/goldmane-666569f655-84dz5"
Nov 5 15:51:34.704070 kubelet[2769]: I1105 15:51:34.703835 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2415f61-2034-4efd-a40a-a585aaa31215-config-volume\") pod \"coredns-674b8bbfcf-fwh4z\" (UID: \"d2415f61-2034-4efd-a40a-a585aaa31215\") " pod="kube-system/coredns-674b8bbfcf-fwh4z"
Nov 5 15:51:34.704070 kubelet[2769]: I1105 15:51:34.703888 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0421556-4619-489b-96b6-556923804205-goldmane-ca-bundle\") pod \"goldmane-666569f655-84dz5\" (UID: \"d0421556-4619-489b-96b6-556923804205\") " pod="calico-system/goldmane-666569f655-84dz5"
Nov 5 15:51:34.704070 kubelet[2769]: I1105 15:51:34.703938 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/165fdd14-70f6-41d7-a608-5c88252d2d07-calico-apiserver-certs\") pod \"calico-apiserver-69cd9bb6f5-8xkrl\" (UID: \"165fdd14-70f6-41d7-a608-5c88252d2d07\") " pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl"
Nov 5 15:51:34.704070 kubelet[2769]: I1105 15:51:34.703967 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmmlq\" (UniqueName: \"kubernetes.io/projected/d0421556-4619-489b-96b6-556923804205-kube-api-access-gmmlq\") pod \"goldmane-666569f655-84dz5\" (UID: \"d0421556-4619-489b-96b6-556923804205\") " pod="calico-system/goldmane-666569f655-84dz5"
Nov 5 15:51:34.704070 kubelet[2769]: I1105 15:51:34.704024 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84vfr\" (UniqueName: \"kubernetes.io/projected/899384b2-cd0c-4539-b89b-fa912eceabb8-kube-api-access-84vfr\") pod \"coredns-674b8bbfcf-t2gdt\" (UID: \"899384b2-cd0c-4539-b89b-fa912eceabb8\") " pod="kube-system/coredns-674b8bbfcf-t2gdt"
Nov 5 15:51:34.705007 kubelet[2769]: I1105 15:51:34.704358 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ddjw\" (UniqueName: \"kubernetes.io/projected/d2415f61-2034-4efd-a40a-a585aaa31215-kube-api-access-9ddjw\") pod \"coredns-674b8bbfcf-fwh4z\" (UID: \"d2415f61-2034-4efd-a40a-a585aaa31215\") " pod="kube-system/coredns-674b8bbfcf-fwh4z"
Nov 5 15:51:34.705007 kubelet[2769]: I1105 15:51:34.704465 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/899384b2-cd0c-4539-b89b-fa912eceabb8-config-volume\") pod \"coredns-674b8bbfcf-t2gdt\" (UID: \"899384b2-cd0c-4539-b89b-fa912eceabb8\") " pod="kube-system/coredns-674b8bbfcf-t2gdt"
Nov 5 15:51:34.705007 kubelet[2769]: I1105 15:51:34.704500 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lg2t\" (UniqueName: \"kubernetes.io/projected/2171a61d-ffd4-4f1c-8106-ddf8826eef75-kube-api-access-2lg2t\") pod \"calico-apiserver-69cd9bb6f5-kptv5\" (UID: \"2171a61d-ffd4-4f1c-8106-ddf8826eef75\") " pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5"
Nov 5 15:51:34.706367 systemd[1]: Created slice kubepods-besteffort-pod165fdd14_70f6_41d7_a608_5c88252d2d07.slice - libcontainer container kubepods-besteffort-pod165fdd14_70f6_41d7_a608_5c88252d2d07.slice.
Nov 5 15:51:34.718877 systemd[1]: Created slice kubepods-besteffort-podd0421556_4619_489b_96b6_556923804205.slice - libcontainer container kubepods-besteffort-podd0421556_4619_489b_96b6_556923804205.slice.
Nov 5 15:51:34.732956 systemd[1]: Created slice kubepods-besteffort-pod2171a61d_ffd4_4f1c_8106_ddf8826eef75.slice - libcontainer container kubepods-besteffort-pod2171a61d_ffd4_4f1c_8106_ddf8826eef75.slice.
Nov 5 15:51:34.746354 systemd[1]: Created slice kubepods-burstable-podd2415f61_2034_4efd_a40a_a585aaa31215.slice - libcontainer container kubepods-burstable-podd2415f61_2034_4efd_a40a_a585aaa31215.slice.
Nov 5 15:51:34.756836 systemd[1]: Created slice kubepods-besteffort-pod1e405a49_8153_4577_b190_3b34d7fc5814.slice - libcontainer container kubepods-besteffort-pod1e405a49_8153_4577_b190_3b34d7fc5814.slice.
Nov 5 15:51:34.767504 systemd[1]: Created slice kubepods-besteffort-podfcbbdd80_a45a_4836_a3fd_4187225980a7.slice - libcontainer container kubepods-besteffort-podfcbbdd80_a45a_4836_a3fd_4187225980a7.slice.
Nov 5 15:51:34.805213 kubelet[2769]: I1105 15:51:34.805039 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg27v\" (UniqueName: \"kubernetes.io/projected/fcbbdd80-a45a-4836-a3fd-4187225980a7-kube-api-access-dg27v\") pod \"whisker-5694d6cb77-mxk64\" (UID: \"fcbbdd80-a45a-4836-a3fd-4187225980a7\") " pod="calico-system/whisker-5694d6cb77-mxk64"
Nov 5 15:51:34.805420 kubelet[2769]: I1105 15:51:34.805221 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8hsb\" (UniqueName: \"kubernetes.io/projected/1e405a49-8153-4577-b190-3b34d7fc5814-kube-api-access-z8hsb\") pod \"calico-kube-controllers-59557cc4f4-hjzvn\" (UID: \"1e405a49-8153-4577-b190-3b34d7fc5814\") " pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn"
Nov 5 15:51:34.805420 kubelet[2769]: I1105 15:51:34.805355 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e405a49-8153-4577-b190-3b34d7fc5814-tigera-ca-bundle\") pod \"calico-kube-controllers-59557cc4f4-hjzvn\" (UID: \"1e405a49-8153-4577-b190-3b34d7fc5814\") " pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn"
Nov 5 15:51:34.807108 kubelet[2769]: I1105 15:51:34.805517 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcbbdd80-a45a-4836-a3fd-4187225980a7-whisker-backend-key-pair\")
pod \"whisker-5694d6cb77-mxk64\" (UID: \"fcbbdd80-a45a-4836-a3fd-4187225980a7\") " pod="calico-system/whisker-5694d6cb77-mxk64" Nov 5 15:51:34.807108 kubelet[2769]: I1105 15:51:34.805543 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbbdd80-a45a-4836-a3fd-4187225980a7-whisker-ca-bundle\") pod \"whisker-5694d6cb77-mxk64\" (UID: \"fcbbdd80-a45a-4836-a3fd-4187225980a7\") " pod="calico-system/whisker-5694d6cb77-mxk64" Nov 5 15:51:34.895198 systemd[1]: Created slice kubepods-besteffort-podbe0a8e42_97b5_40e7_95d6_3baf83ea6dbb.slice - libcontainer container kubepods-besteffort-podbe0a8e42_97b5_40e7_95d6_3baf83ea6dbb.slice. Nov 5 15:51:34.929111 containerd[1598]: time="2025-11-05T15:51:34.928955465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-69q97,Uid:be0a8e42-97b5-40e7-95d6-3baf83ea6dbb,Namespace:calico-system,Attempt:0,}" Nov 5 15:51:34.990505 kubelet[2769]: E1105 15:51:34.990444 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:34.994357 containerd[1598]: time="2025-11-05T15:51:34.994271381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2gdt,Uid:899384b2-cd0c-4539-b89b-fa912eceabb8,Namespace:kube-system,Attempt:0,}" Nov 5 15:51:35.020342 containerd[1598]: time="2025-11-05T15:51:35.020076103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69cd9bb6f5-8xkrl,Uid:165fdd14-70f6-41d7-a608-5c88252d2d07,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:51:35.027502 containerd[1598]: time="2025-11-05T15:51:35.027283951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-84dz5,Uid:d0421556-4619-489b-96b6-556923804205,Namespace:calico-system,Attempt:0,}" Nov 5 15:51:35.049695 containerd[1598]: 
time="2025-11-05T15:51:35.048695561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69cd9bb6f5-kptv5,Uid:2171a61d-ffd4-4f1c-8106-ddf8826eef75,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:51:35.052997 kubelet[2769]: E1105 15:51:35.052937 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:35.055731 containerd[1598]: time="2025-11-05T15:51:35.055511541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fwh4z,Uid:d2415f61-2034-4efd-a40a-a585aaa31215,Namespace:kube-system,Attempt:0,}" Nov 5 15:51:35.066904 containerd[1598]: time="2025-11-05T15:51:35.066846934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59557cc4f4-hjzvn,Uid:1e405a49-8153-4577-b190-3b34d7fc5814,Namespace:calico-system,Attempt:0,}" Nov 5 15:51:35.078083 containerd[1598]: time="2025-11-05T15:51:35.078032225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5694d6cb77-mxk64,Uid:fcbbdd80-a45a-4836-a3fd-4187225980a7,Namespace:calico-system,Attempt:0,}" Nov 5 15:51:35.123418 kubelet[2769]: E1105 15:51:35.123377 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:35.153572 containerd[1598]: time="2025-11-05T15:51:35.153324245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:51:35.411055 containerd[1598]: time="2025-11-05T15:51:35.410966975Z" level=error msg="Failed to destroy network for sandbox \"9580175360ec32581fcf3a746204f586aa468cea14458fd88058fd6f75756b0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.431800 
containerd[1598]: time="2025-11-05T15:51:35.419695631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-84dz5,Uid:d0421556-4619-489b-96b6-556923804205,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9580175360ec32581fcf3a746204f586aa468cea14458fd88058fd6f75756b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.432233 containerd[1598]: time="2025-11-05T15:51:35.427753815Z" level=error msg="Failed to destroy network for sandbox \"6acf5b7a1325fd91d55a67c760f83366a7a5220ac1f8a92f4d95fe353c74e314\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.438020 containerd[1598]: time="2025-11-05T15:51:35.437955721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69cd9bb6f5-kptv5,Uid:2171a61d-ffd4-4f1c-8106-ddf8826eef75,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6acf5b7a1325fd91d55a67c760f83366a7a5220ac1f8a92f4d95fe353c74e314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.447187 containerd[1598]: time="2025-11-05T15:51:35.447136006Z" level=error msg="Failed to destroy network for sandbox \"8a94bc149a80e551c424343a06d44802ff2a047dc7ad52fd5bb71622fece4ced\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.450350 kubelet[2769]: E1105 15:51:35.450274 2769 
log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6acf5b7a1325fd91d55a67c760f83366a7a5220ac1f8a92f4d95fe353c74e314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.451977 kubelet[2769]: E1105 15:51:35.450282 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9580175360ec32581fcf3a746204f586aa468cea14458fd88058fd6f75756b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.451977 kubelet[2769]: E1105 15:51:35.450443 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9580175360ec32581fcf3a746204f586aa468cea14458fd88058fd6f75756b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-84dz5" Nov 5 15:51:35.451977 kubelet[2769]: E1105 15:51:35.450501 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9580175360ec32581fcf3a746204f586aa468cea14458fd88058fd6f75756b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-84dz5" Nov 5 15:51:35.451977 kubelet[2769]: E1105 15:51:35.450371 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"6acf5b7a1325fd91d55a67c760f83366a7a5220ac1f8a92f4d95fe353c74e314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" Nov 5 15:51:35.453190 containerd[1598]: time="2025-11-05T15:51:35.450794945Z" level=error msg="Failed to destroy network for sandbox \"1ea667502d2f7e1460ef426b85442e5e758a99056414dee13efde14c4aa814c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.453237 kubelet[2769]: E1105 15:51:35.450557 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6acf5b7a1325fd91d55a67c760f83366a7a5220ac1f8a92f4d95fe353c74e314\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" Nov 5 15:51:35.453237 kubelet[2769]: E1105 15:51:35.450621 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69cd9bb6f5-kptv5_calico-apiserver(2171a61d-ffd4-4f1c-8106-ddf8826eef75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69cd9bb6f5-kptv5_calico-apiserver(2171a61d-ffd4-4f1c-8106-ddf8826eef75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6acf5b7a1325fd91d55a67c760f83366a7a5220ac1f8a92f4d95fe353c74e314\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" 
podUID="2171a61d-ffd4-4f1c-8106-ddf8826eef75" Nov 5 15:51:35.453237 kubelet[2769]: E1105 15:51:35.450940 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-84dz5_calico-system(d0421556-4619-489b-96b6-556923804205)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-84dz5_calico-system(d0421556-4619-489b-96b6-556923804205)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9580175360ec32581fcf3a746204f586aa468cea14458fd88058fd6f75756b0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-84dz5" podUID="d0421556-4619-489b-96b6-556923804205" Nov 5 15:51:35.455330 containerd[1598]: time="2025-11-05T15:51:35.455103485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69cd9bb6f5-8xkrl,Uid:165fdd14-70f6-41d7-a608-5c88252d2d07,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a94bc149a80e551c424343a06d44802ff2a047dc7ad52fd5bb71622fece4ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.455959 kubelet[2769]: E1105 15:51:35.455761 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a94bc149a80e551c424343a06d44802ff2a047dc7ad52fd5bb71622fece4ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.455959 kubelet[2769]: E1105 15:51:35.455905 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a94bc149a80e551c424343a06d44802ff2a047dc7ad52fd5bb71622fece4ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" Nov 5 15:51:35.456247 kubelet[2769]: E1105 15:51:35.455928 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a94bc149a80e551c424343a06d44802ff2a047dc7ad52fd5bb71622fece4ced\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" Nov 5 15:51:35.456381 kubelet[2769]: E1105 15:51:35.456328 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69cd9bb6f5-8xkrl_calico-apiserver(165fdd14-70f6-41d7-a608-5c88252d2d07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69cd9bb6f5-8xkrl_calico-apiserver(165fdd14-70f6-41d7-a608-5c88252d2d07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a94bc149a80e551c424343a06d44802ff2a047dc7ad52fd5bb71622fece4ced\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" podUID="165fdd14-70f6-41d7-a608-5c88252d2d07" Nov 5 15:51:35.458663 containerd[1598]: time="2025-11-05T15:51:35.457999201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fwh4z,Uid:d2415f61-2034-4efd-a40a-a585aaa31215,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"1ea667502d2f7e1460ef426b85442e5e758a99056414dee13efde14c4aa814c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.459347 kubelet[2769]: E1105 15:51:35.458278 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea667502d2f7e1460ef426b85442e5e758a99056414dee13efde14c4aa814c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.459347 kubelet[2769]: E1105 15:51:35.458531 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea667502d2f7e1460ef426b85442e5e758a99056414dee13efde14c4aa814c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fwh4z" Nov 5 15:51:35.459347 kubelet[2769]: E1105 15:51:35.458566 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ea667502d2f7e1460ef426b85442e5e758a99056414dee13efde14c4aa814c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fwh4z" Nov 5 15:51:35.459502 kubelet[2769]: E1105 15:51:35.459102 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fwh4z_kube-system(d2415f61-2034-4efd-a40a-a585aaa31215)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fwh4z_kube-system(d2415f61-2034-4efd-a40a-a585aaa31215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ea667502d2f7e1460ef426b85442e5e758a99056414dee13efde14c4aa814c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fwh4z" podUID="d2415f61-2034-4efd-a40a-a585aaa31215" Nov 5 15:51:35.479913 containerd[1598]: time="2025-11-05T15:51:35.479674061Z" level=error msg="Failed to destroy network for sandbox \"1ade87be4b5b1376a7fae4e58851454c63ffb699ab13d03a905a25a15a7da6ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.481816 containerd[1598]: time="2025-11-05T15:51:35.480932738Z" level=error msg="Failed to destroy network for sandbox \"473ee2df9c7a538a16c3e6656c35f1dba0a90e7bc46c1875d1caae96ca7fa717\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.485690 containerd[1598]: time="2025-11-05T15:51:35.485626928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-69q97,Uid:be0a8e42-97b5-40e7-95d6-3baf83ea6dbb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ade87be4b5b1376a7fae4e58851454c63ffb699ab13d03a905a25a15a7da6ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.486159 kubelet[2769]: E1105 15:51:35.486117 2769 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ade87be4b5b1376a7fae4e58851454c63ffb699ab13d03a905a25a15a7da6ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.486239 kubelet[2769]: E1105 15:51:35.486188 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ade87be4b5b1376a7fae4e58851454c63ffb699ab13d03a905a25a15a7da6ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-69q97" Nov 5 15:51:35.486239 kubelet[2769]: E1105 15:51:35.486213 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ade87be4b5b1376a7fae4e58851454c63ffb699ab13d03a905a25a15a7da6ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-69q97" Nov 5 15:51:35.486300 kubelet[2769]: E1105 15:51:35.486264 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-69q97_calico-system(be0a8e42-97b5-40e7-95d6-3baf83ea6dbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-69q97_calico-system(be0a8e42-97b5-40e7-95d6-3baf83ea6dbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ade87be4b5b1376a7fae4e58851454c63ffb699ab13d03a905a25a15a7da6ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb" Nov 5 15:51:35.487658 containerd[1598]: time="2025-11-05T15:51:35.487384920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2gdt,Uid:899384b2-cd0c-4539-b89b-fa912eceabb8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"473ee2df9c7a538a16c3e6656c35f1dba0a90e7bc46c1875d1caae96ca7fa717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.487974 kubelet[2769]: E1105 15:51:35.487878 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"473ee2df9c7a538a16c3e6656c35f1dba0a90e7bc46c1875d1caae96ca7fa717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.487974 kubelet[2769]: E1105 15:51:35.487964 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"473ee2df9c7a538a16c3e6656c35f1dba0a90e7bc46c1875d1caae96ca7fa717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t2gdt" Nov 5 15:51:35.488050 kubelet[2769]: E1105 15:51:35.487988 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"473ee2df9c7a538a16c3e6656c35f1dba0a90e7bc46c1875d1caae96ca7fa717\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t2gdt" Nov 5 15:51:35.488077 kubelet[2769]: E1105 15:51:35.488045 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t2gdt_kube-system(899384b2-cd0c-4539-b89b-fa912eceabb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t2gdt_kube-system(899384b2-cd0c-4539-b89b-fa912eceabb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"473ee2df9c7a538a16c3e6656c35f1dba0a90e7bc46c1875d1caae96ca7fa717\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t2gdt" podUID="899384b2-cd0c-4539-b89b-fa912eceabb8" Nov 5 15:51:35.489528 containerd[1598]: time="2025-11-05T15:51:35.489420952Z" level=error msg="Failed to destroy network for sandbox \"663be7df41e1f4cf0f95b0e520bfbb533ce629b4cc14976f2fcd8cbfc7d4e5ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.490982 containerd[1598]: time="2025-11-05T15:51:35.490396678Z" level=error msg="Failed to destroy network for sandbox \"bcbc81a6ad3ef5497d07d3c23065a53b25f192607617136ccd8d315a4bd07dda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.491666 containerd[1598]: time="2025-11-05T15:51:35.491238696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5694d6cb77-mxk64,Uid:fcbbdd80-a45a-4836-a3fd-4187225980a7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"663be7df41e1f4cf0f95b0e520bfbb533ce629b4cc14976f2fcd8cbfc7d4e5ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.491810 kubelet[2769]: E1105 15:51:35.491424 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663be7df41e1f4cf0f95b0e520bfbb533ce629b4cc14976f2fcd8cbfc7d4e5ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.491810 kubelet[2769]: E1105 15:51:35.491483 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663be7df41e1f4cf0f95b0e520bfbb533ce629b4cc14976f2fcd8cbfc7d4e5ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5694d6cb77-mxk64" Nov 5 15:51:35.491810 kubelet[2769]: E1105 15:51:35.491513 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663be7df41e1f4cf0f95b0e520bfbb533ce629b4cc14976f2fcd8cbfc7d4e5ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5694d6cb77-mxk64" Nov 5 15:51:35.491951 kubelet[2769]: E1105 15:51:35.491596 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5694d6cb77-mxk64_calico-system(fcbbdd80-a45a-4836-a3fd-4187225980a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5694d6cb77-mxk64_calico-system(fcbbdd80-a45a-4836-a3fd-4187225980a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"663be7df41e1f4cf0f95b0e520bfbb533ce629b4cc14976f2fcd8cbfc7d4e5ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5694d6cb77-mxk64" podUID="fcbbdd80-a45a-4836-a3fd-4187225980a7" Nov 5 15:51:35.493144 containerd[1598]: time="2025-11-05T15:51:35.493057886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59557cc4f4-hjzvn,Uid:1e405a49-8153-4577-b190-3b34d7fc5814,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbc81a6ad3ef5497d07d3c23065a53b25f192607617136ccd8d315a4bd07dda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.493522 kubelet[2769]: E1105 15:51:35.493488 2769 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbc81a6ad3ef5497d07d3c23065a53b25f192607617136ccd8d315a4bd07dda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:51:35.493613 kubelet[2769]: E1105 15:51:35.493544 2769 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbc81a6ad3ef5497d07d3c23065a53b25f192607617136ccd8d315a4bd07dda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" Nov 5 15:51:35.493613 kubelet[2769]: E1105 15:51:35.493566 2769 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcbc81a6ad3ef5497d07d3c23065a53b25f192607617136ccd8d315a4bd07dda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" Nov 5 15:51:35.494213 kubelet[2769]: E1105 15:51:35.493628 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59557cc4f4-hjzvn_calico-system(1e405a49-8153-4577-b190-3b34d7fc5814)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59557cc4f4-hjzvn_calico-system(1e405a49-8153-4577-b190-3b34d7fc5814)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcbc81a6ad3ef5497d07d3c23065a53b25f192607617136ccd8d315a4bd07dda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" podUID="1e405a49-8153-4577-b190-3b34d7fc5814" Nov 5 15:51:43.452941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726124220.mount: Deactivated successfully. 
Nov 5 15:51:43.519350 containerd[1598]: time="2025-11-05T15:51:43.505021478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:51:43.524224 containerd[1598]: time="2025-11-05T15:51:43.524162146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 15:51:43.524748 containerd[1598]: time="2025-11-05T15:51:43.524721367Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:51:43.526235 containerd[1598]: time="2025-11-05T15:51:43.525447328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:51:43.528904 containerd[1598]: time="2025-11-05T15:51:43.528862753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.36467304s" Nov 5 15:51:43.529033 containerd[1598]: time="2025-11-05T15:51:43.529019315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 15:51:43.559120 containerd[1598]: time="2025-11-05T15:51:43.558428458Z" level=info msg="CreateContainer within sandbox \"16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:51:43.595676 containerd[1598]: time="2025-11-05T15:51:43.591415789Z" level=info msg="Container 
e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:51:43.593596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2315736233.mount: Deactivated successfully. Nov 5 15:51:43.646660 containerd[1598]: time="2025-11-05T15:51:43.646542216Z" level=info msg="CreateContainer within sandbox \"16682a307fb2d2f1f7d20e7b7c18e796acd1ce5a4b3b75e03356f35ee08873e9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac\"" Nov 5 15:51:43.647964 containerd[1598]: time="2025-11-05T15:51:43.647932943Z" level=info msg="StartContainer for \"e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac\"" Nov 5 15:51:43.659050 containerd[1598]: time="2025-11-05T15:51:43.658924034Z" level=info msg="connecting to shim e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac" address="unix:///run/containerd/s/de0bcf4b1f71edc3e2536f32d96a011a022611bcc9449d7da8d2377faba737e3" protocol=ttrpc version=3 Nov 5 15:51:43.777981 systemd[1]: Started cri-containerd-e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac.scope - libcontainer container e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac. Nov 5 15:51:43.888977 containerd[1598]: time="2025-11-05T15:51:43.888930884Z" level=info msg="StartContainer for \"e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac\" returns successfully" Nov 5 15:51:44.040623 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:51:44.041507 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 15:51:44.162620 kubelet[2769]: E1105 15:51:44.162132 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:44.281253 kubelet[2769]: I1105 15:51:44.279090 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q2jkz" podStartSLOduration=1.674267385 podStartE2EDuration="21.27907s" podCreationTimestamp="2025-11-05 15:51:23 +0000 UTC" firstStartedPulling="2025-11-05 15:51:23.925099259 +0000 UTC m=+24.208405853" lastFinishedPulling="2025-11-05 15:51:43.529901898 +0000 UTC m=+43.813208468" observedRunningTime="2025-11-05 15:51:44.202205169 +0000 UTC m=+44.485511765" watchObservedRunningTime="2025-11-05 15:51:44.27907 +0000 UTC m=+44.562376596" Nov 5 15:51:44.379674 kubelet[2769]: I1105 15:51:44.379595 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcbbdd80-a45a-4836-a3fd-4187225980a7-whisker-backend-key-pair\") pod \"fcbbdd80-a45a-4836-a3fd-4187225980a7\" (UID: \"fcbbdd80-a45a-4836-a3fd-4187225980a7\") " Nov 5 15:51:44.379892 kubelet[2769]: I1105 15:51:44.379701 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbbdd80-a45a-4836-a3fd-4187225980a7-whisker-ca-bundle\") pod \"fcbbdd80-a45a-4836-a3fd-4187225980a7\" (UID: \"fcbbdd80-a45a-4836-a3fd-4187225980a7\") " Nov 5 15:51:44.380083 kubelet[2769]: I1105 15:51:44.380061 2769 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg27v\" (UniqueName: \"kubernetes.io/projected/fcbbdd80-a45a-4836-a3fd-4187225980a7-kube-api-access-dg27v\") pod \"fcbbdd80-a45a-4836-a3fd-4187225980a7\" (UID: \"fcbbdd80-a45a-4836-a3fd-4187225980a7\") " Nov 5 15:51:44.394080 kubelet[2769]: I1105 15:51:44.393620 2769 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcbbdd80-a45a-4836-a3fd-4187225980a7-kube-api-access-dg27v" (OuterVolumeSpecName: "kube-api-access-dg27v") pod "fcbbdd80-a45a-4836-a3fd-4187225980a7" (UID: "fcbbdd80-a45a-4836-a3fd-4187225980a7"). InnerVolumeSpecName "kube-api-access-dg27v". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:51:44.394080 kubelet[2769]: I1105 15:51:44.393980 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcbbdd80-a45a-4836-a3fd-4187225980a7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fcbbdd80-a45a-4836-a3fd-4187225980a7" (UID: "fcbbdd80-a45a-4836-a3fd-4187225980a7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:51:44.397658 kubelet[2769]: I1105 15:51:44.397401 2769 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcbbdd80-a45a-4836-a3fd-4187225980a7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fcbbdd80-a45a-4836-a3fd-4187225980a7" (UID: "fcbbdd80-a45a-4836-a3fd-4187225980a7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:51:44.455413 systemd[1]: var-lib-kubelet-pods-fcbbdd80\x2da45a\x2d4836\x2da3fd\x2d4187225980a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddg27v.mount: Deactivated successfully. Nov 5 15:51:44.455581 systemd[1]: var-lib-kubelet-pods-fcbbdd80\x2da45a\x2d4836\x2da3fd\x2d4187225980a7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 5 15:51:44.481327 kubelet[2769]: I1105 15:51:44.481264 2769 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fcbbdd80-a45a-4836-a3fd-4187225980a7-whisker-backend-key-pair\") on node \"ci-4487.0.1-6-a291033793\" DevicePath \"\"" Nov 5 15:51:44.481327 kubelet[2769]: I1105 15:51:44.481308 2769 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbbdd80-a45a-4836-a3fd-4187225980a7-whisker-ca-bundle\") on node \"ci-4487.0.1-6-a291033793\" DevicePath \"\"" Nov 5 15:51:44.481327 kubelet[2769]: I1105 15:51:44.481319 2769 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dg27v\" (UniqueName: \"kubernetes.io/projected/fcbbdd80-a45a-4836-a3fd-4187225980a7-kube-api-access-dg27v\") on node \"ci-4487.0.1-6-a291033793\" DevicePath \"\"" Nov 5 15:51:45.163705 kubelet[2769]: I1105 15:51:45.163169 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:51:45.164255 kubelet[2769]: E1105 15:51:45.164237 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:45.170703 systemd[1]: Removed slice kubepods-besteffort-podfcbbdd80_a45a_4836_a3fd_4187225980a7.slice - libcontainer container kubepods-besteffort-podfcbbdd80_a45a_4836_a3fd_4187225980a7.slice. Nov 5 15:51:45.272765 systemd[1]: Created slice kubepods-besteffort-pod03cfcdc6_c1a2_47b4_849d_d54b33232d4e.slice - libcontainer container kubepods-besteffort-pod03cfcdc6_c1a2_47b4_849d_d54b33232d4e.slice. 
Nov 5 15:51:45.287800 kubelet[2769]: I1105 15:51:45.286433 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/03cfcdc6-c1a2-47b4-849d-d54b33232d4e-whisker-backend-key-pair\") pod \"whisker-67b9b594f7-vkfhk\" (UID: \"03cfcdc6-c1a2-47b4-849d-d54b33232d4e\") " pod="calico-system/whisker-67b9b594f7-vkfhk" Nov 5 15:51:45.287800 kubelet[2769]: I1105 15:51:45.287721 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/03cfcdc6-c1a2-47b4-849d-d54b33232d4e-whisker-ca-bundle\") pod \"whisker-67b9b594f7-vkfhk\" (UID: \"03cfcdc6-c1a2-47b4-849d-d54b33232d4e\") " pod="calico-system/whisker-67b9b594f7-vkfhk" Nov 5 15:51:45.287800 kubelet[2769]: I1105 15:51:45.287767 2769 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2csh7\" (UniqueName: \"kubernetes.io/projected/03cfcdc6-c1a2-47b4-849d-d54b33232d4e-kube-api-access-2csh7\") pod \"whisker-67b9b594f7-vkfhk\" (UID: \"03cfcdc6-c1a2-47b4-849d-d54b33232d4e\") " pod="calico-system/whisker-67b9b594f7-vkfhk" Nov 5 15:51:45.580036 containerd[1598]: time="2025-11-05T15:51:45.579975949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b9b594f7-vkfhk,Uid:03cfcdc6-c1a2-47b4-849d-d54b33232d4e,Namespace:calico-system,Attempt:0,}" Nov 5 15:51:45.895507 kubelet[2769]: I1105 15:51:45.894500 2769 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcbbdd80-a45a-4836-a3fd-4187225980a7" path="/var/lib/kubelet/pods/fcbbdd80-a45a-4836-a3fd-4187225980a7/volumes" Nov 5 15:51:45.897163 containerd[1598]: time="2025-11-05T15:51:45.897054991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-84dz5,Uid:d0421556-4619-489b-96b6-556923804205,Namespace:calico-system,Attempt:0,}" Nov 5 15:51:46.121821 
systemd-networkd[1496]: calia91ef472ce7: Link UP Nov 5 15:51:46.125407 systemd-networkd[1496]: calia91ef472ce7: Gained carrier Nov 5 15:51:46.175449 containerd[1598]: 2025-11-05 15:51:45.633 [INFO][3812] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:51:46.175449 containerd[1598]: 2025-11-05 15:51:45.663 [INFO][3812] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0 whisker-67b9b594f7- calico-system 03cfcdc6-c1a2-47b4-849d-d54b33232d4e 933 0 2025-11-05 15:51:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67b9b594f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487.0.1-6-a291033793 whisker-67b9b594f7-vkfhk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia91ef472ce7 [] [] }} ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Namespace="calico-system" Pod="whisker-67b9b594f7-vkfhk" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-" Nov 5 15:51:46.175449 containerd[1598]: 2025-11-05 15:51:45.663 [INFO][3812] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Namespace="calico-system" Pod="whisker-67b9b594f7-vkfhk" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" Nov 5 15:51:46.175449 containerd[1598]: 2025-11-05 15:51:45.938 [INFO][3848] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" HandleID="k8s-pod-network.52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Workload="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" Nov 5 15:51:46.175767 containerd[1598]: 2025-11-05 
15:51:45.943 [INFO][3848] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" HandleID="k8s-pod-network.52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Workload="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-6-a291033793", "pod":"whisker-67b9b594f7-vkfhk", "timestamp":"2025-11-05 15:51:45.937986933 +0000 UTC"}, Hostname:"ci-4487.0.1-6-a291033793", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:51:46.175767 containerd[1598]: 2025-11-05 15:51:45.943 [INFO][3848] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:51:46.175767 containerd[1598]: 2025-11-05 15:51:45.944 [INFO][3848] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:51:46.175767 containerd[1598]: 2025-11-05 15:51:45.947 [INFO][3848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-6-a291033793' Nov 5 15:51:46.175767 containerd[1598]: 2025-11-05 15:51:45.978 [INFO][3848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.175767 containerd[1598]: 2025-11-05 15:51:45.993 [INFO][3848] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.175767 containerd[1598]: 2025-11-05 15:51:46.010 [INFO][3848] ipam/ipam.go 511: Trying affinity for 192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.175767 containerd[1598]: 2025-11-05 15:51:46.015 [INFO][3848] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.175767 containerd[1598]: 2025-11-05 15:51:46.019 [INFO][3848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.176002 containerd[1598]: 2025-11-05 15:51:46.019 [INFO][3848] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.176002 containerd[1598]: 2025-11-05 15:51:46.023 [INFO][3848] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c Nov 5 15:51:46.176002 containerd[1598]: 2025-11-05 15:51:46.033 [INFO][3848] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.176002 containerd[1598]: 2025-11-05 15:51:46.041 [INFO][3848] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.101.1/26] block=192.168.101.0/26 handle="k8s-pod-network.52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.176002 containerd[1598]: 2025-11-05 15:51:46.041 [INFO][3848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.1/26] handle="k8s-pod-network.52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.176002 containerd[1598]: 2025-11-05 15:51:46.041 [INFO][3848] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:51:46.176002 containerd[1598]: 2025-11-05 15:51:46.042 [INFO][3848] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.1/26] IPv6=[] ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" HandleID="k8s-pod-network.52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Workload="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" Nov 5 15:51:46.176154 containerd[1598]: 2025-11-05 15:51:46.049 [INFO][3812] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Namespace="calico-system" Pod="whisker-67b9b594f7-vkfhk" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0", GenerateName:"whisker-67b9b594f7-", Namespace:"calico-system", SelfLink:"", UID:"03cfcdc6-c1a2-47b4-849d-d54b33232d4e", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67b9b594f7", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"", Pod:"whisker-67b9b594f7-vkfhk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia91ef472ce7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:46.176154 containerd[1598]: 2025-11-05 15:51:46.051 [INFO][3812] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.1/32] ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Namespace="calico-system" Pod="whisker-67b9b594f7-vkfhk" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" Nov 5 15:51:46.176252 containerd[1598]: 2025-11-05 15:51:46.051 [INFO][3812] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia91ef472ce7 ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Namespace="calico-system" Pod="whisker-67b9b594f7-vkfhk" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" Nov 5 15:51:46.176252 containerd[1598]: 2025-11-05 15:51:46.124 [INFO][3812] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Namespace="calico-system" Pod="whisker-67b9b594f7-vkfhk" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" Nov 5 15:51:46.176295 containerd[1598]: 2025-11-05 15:51:46.128 [INFO][3812] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Namespace="calico-system" Pod="whisker-67b9b594f7-vkfhk" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0", GenerateName:"whisker-67b9b594f7-", Namespace:"calico-system", SelfLink:"", UID:"03cfcdc6-c1a2-47b4-849d-d54b33232d4e", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67b9b594f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c", Pod:"whisker-67b9b594f7-vkfhk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.101.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia91ef472ce7", MAC:"fe:81:f0:ed:39:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:46.176351 containerd[1598]: 2025-11-05 15:51:46.168 [INFO][3812] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" Namespace="calico-system" 
Pod="whisker-67b9b594f7-vkfhk" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-whisker--67b9b594f7--vkfhk-eth0" Nov 5 15:51:46.341940 systemd-networkd[1496]: calia2f362b42fd: Link UP Nov 5 15:51:46.344006 systemd-networkd[1496]: calia2f362b42fd: Gained carrier Nov 5 15:51:46.456089 containerd[1598]: 2025-11-05 15:51:45.996 [INFO][3909] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:51:46.456089 containerd[1598]: 2025-11-05 15:51:46.017 [INFO][3909] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0 goldmane-666569f655- calico-system d0421556-4619-489b-96b6-556923804205 857 0 2025-11-05 15:51:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487.0.1-6-a291033793 goldmane-666569f655-84dz5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia2f362b42fd [] [] }} ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Namespace="calico-system" Pod="goldmane-666569f655-84dz5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-" Nov 5 15:51:46.456089 containerd[1598]: 2025-11-05 15:51:46.018 [INFO][3909] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Namespace="calico-system" Pod="goldmane-666569f655-84dz5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" Nov 5 15:51:46.456089 containerd[1598]: 2025-11-05 15:51:46.086 [INFO][3928] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" 
HandleID="k8s-pod-network.5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Workload="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" Nov 5 15:51:46.456469 containerd[1598]: 2025-11-05 15:51:46.086 [INFO][3928] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" HandleID="k8s-pod-network.5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Workload="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-6-a291033793", "pod":"goldmane-666569f655-84dz5", "timestamp":"2025-11-05 15:51:46.086087978 +0000 UTC"}, Hostname:"ci-4487.0.1-6-a291033793", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:51:46.456469 containerd[1598]: 2025-11-05 15:51:46.086 [INFO][3928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:51:46.456469 containerd[1598]: 2025-11-05 15:51:46.086 [INFO][3928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:51:46.456469 containerd[1598]: 2025-11-05 15:51:46.086 [INFO][3928] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-6-a291033793' Nov 5 15:51:46.456469 containerd[1598]: 2025-11-05 15:51:46.142 [INFO][3928] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.456469 containerd[1598]: 2025-11-05 15:51:46.189 [INFO][3928] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.456469 containerd[1598]: 2025-11-05 15:51:46.241 [INFO][3928] ipam/ipam.go 511: Trying affinity for 192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.456469 containerd[1598]: 2025-11-05 15:51:46.269 [INFO][3928] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.456469 containerd[1598]: 2025-11-05 15:51:46.280 [INFO][3928] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.456771 containerd[1598]: 2025-11-05 15:51:46.280 [INFO][3928] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.456771 containerd[1598]: 2025-11-05 15:51:46.297 [INFO][3928] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314 Nov 5 15:51:46.456771 containerd[1598]: 2025-11-05 15:51:46.309 [INFO][3928] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.456771 containerd[1598]: 2025-11-05 15:51:46.321 [INFO][3928] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.101.2/26] block=192.168.101.0/26 handle="k8s-pod-network.5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.456771 containerd[1598]: 2025-11-05 15:51:46.322 [INFO][3928] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.2/26] handle="k8s-pod-network.5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:46.456771 containerd[1598]: 2025-11-05 15:51:46.322 [INFO][3928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:51:46.456771 containerd[1598]: 2025-11-05 15:51:46.322 [INFO][3928] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.2/26] IPv6=[] ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" HandleID="k8s-pod-network.5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Workload="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" Nov 5 15:51:46.456997 containerd[1598]: 2025-11-05 15:51:46.330 [INFO][3909] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Namespace="calico-system" Pod="goldmane-666569f655-84dz5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d0421556-4619-489b-96b6-556923804205", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"", Pod:"goldmane-666569f655-84dz5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia2f362b42fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:46.457080 containerd[1598]: 2025-11-05 15:51:46.332 [INFO][3909] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.2/32] ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Namespace="calico-system" Pod="goldmane-666569f655-84dz5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" Nov 5 15:51:46.457080 containerd[1598]: 2025-11-05 15:51:46.334 [INFO][3909] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2f362b42fd ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Namespace="calico-system" Pod="goldmane-666569f655-84dz5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" Nov 5 15:51:46.457080 containerd[1598]: 2025-11-05 15:51:46.343 [INFO][3909] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Namespace="calico-system" Pod="goldmane-666569f655-84dz5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" Nov 5 15:51:46.457161 containerd[1598]: 2025-11-05 15:51:46.359 [INFO][3909] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Namespace="calico-system" Pod="goldmane-666569f655-84dz5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"d0421556-4619-489b-96b6-556923804205", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314", Pod:"goldmane-666569f655-84dz5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.101.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia2f362b42fd", MAC:"de:2d:c8:2d:a9:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:46.457226 containerd[1598]: 2025-11-05 15:51:46.444 [INFO][3909] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" Namespace="calico-system" 
Pod="goldmane-666569f655-84dz5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-goldmane--666569f655--84dz5-eth0" Nov 5 15:51:46.619187 containerd[1598]: time="2025-11-05T15:51:46.619126123Z" level=info msg="connecting to shim 52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c" address="unix:///run/containerd/s/7e37f7fded64a5f8208107e8cfcf4d8f9abcb428d1f487f0c67f95cdb8b4cd3a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:46.628103 containerd[1598]: time="2025-11-05T15:51:46.628039748Z" level=info msg="connecting to shim 5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314" address="unix:///run/containerd/s/7732c04cd1abb8b7e5872fb9cf82c5d3d3debcf6a5d84c34ba4d38785019b26f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:46.767044 systemd[1]: Started cri-containerd-52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c.scope - libcontainer container 52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c. Nov 5 15:51:46.770979 systemd[1]: Started cri-containerd-5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314.scope - libcontainer container 5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314. 
Nov 5 15:51:46.886259 kubelet[2769]: E1105 15:51:46.886138 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:46.888449 containerd[1598]: time="2025-11-05T15:51:46.887987964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59557cc4f4-hjzvn,Uid:1e405a49-8153-4577-b190-3b34d7fc5814,Namespace:calico-system,Attempt:0,}" Nov 5 15:51:46.889039 containerd[1598]: time="2025-11-05T15:51:46.888796104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fwh4z,Uid:d2415f61-2034-4efd-a40a-a585aaa31215,Namespace:kube-system,Attempt:0,}" Nov 5 15:51:47.095429 containerd[1598]: time="2025-11-05T15:51:47.095032185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67b9b594f7-vkfhk,Uid:03cfcdc6-c1a2-47b4-849d-d54b33232d4e,Namespace:calico-system,Attempt:0,} returns sandbox id \"52f84a064306aec87b4ec64829bc982cdc573c614d6f52f03328bd349a20ce2c\"" Nov 5 15:51:47.163463 containerd[1598]: time="2025-11-05T15:51:47.163408105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:51:47.275643 systemd-networkd[1496]: calia2b59eed0b7: Link UP Nov 5 15:51:47.276890 systemd-networkd[1496]: calia2b59eed0b7: Gained carrier Nov 5 15:51:47.321673 containerd[1598]: 2025-11-05 15:51:46.961 [INFO][4036] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:51:47.321673 containerd[1598]: 2025-11-05 15:51:47.011 [INFO][4036] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0 coredns-674b8bbfcf- kube-system d2415f61-2034-4efd-a40a-a585aaa31215 859 0 2025-11-05 15:51:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.1-6-a291033793 coredns-674b8bbfcf-fwh4z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia2b59eed0b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwh4z" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-" Nov 5 15:51:47.321673 containerd[1598]: 2025-11-05 15:51:47.011 [INFO][4036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwh4z" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" Nov 5 15:51:47.321673 containerd[1598]: 2025-11-05 15:51:47.183 [INFO][4063] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" HandleID="k8s-pod-network.5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Workload="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" Nov 5 15:51:47.322597 containerd[1598]: 2025-11-05 15:51:47.188 [INFO][4063] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" HandleID="k8s-pod-network.5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Workload="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.1-6-a291033793", "pod":"coredns-674b8bbfcf-fwh4z", "timestamp":"2025-11-05 15:51:47.183488334 +0000 UTC"}, Hostname:"ci-4487.0.1-6-a291033793", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:51:47.322597 containerd[1598]: 2025-11-05 15:51:47.190 [INFO][4063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:51:47.322597 containerd[1598]: 2025-11-05 15:51:47.190 [INFO][4063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:51:47.322597 containerd[1598]: 2025-11-05 15:51:47.190 [INFO][4063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-6-a291033793' Nov 5 15:51:47.322597 containerd[1598]: 2025-11-05 15:51:47.204 [INFO][4063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.322597 containerd[1598]: 2025-11-05 15:51:47.211 [INFO][4063] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.322597 containerd[1598]: 2025-11-05 15:51:47.218 [INFO][4063] ipam/ipam.go 511: Trying affinity for 192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.322597 containerd[1598]: 2025-11-05 15:51:47.221 [INFO][4063] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.322597 containerd[1598]: 2025-11-05 15:51:47.227 [INFO][4063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.323210 containerd[1598]: 2025-11-05 15:51:47.227 [INFO][4063] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.323210 containerd[1598]: 2025-11-05 15:51:47.242 [INFO][4063] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26 Nov 5 15:51:47.323210 containerd[1598]: 
2025-11-05 15:51:47.249 [INFO][4063] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.323210 containerd[1598]: 2025-11-05 15:51:47.259 [INFO][4063] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.3/26] block=192.168.101.0/26 handle="k8s-pod-network.5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.323210 containerd[1598]: 2025-11-05 15:51:47.259 [INFO][4063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.3/26] handle="k8s-pod-network.5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.323210 containerd[1598]: 2025-11-05 15:51:47.260 [INFO][4063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:51:47.323210 containerd[1598]: 2025-11-05 15:51:47.260 [INFO][4063] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.3/26] IPv6=[] ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" HandleID="k8s-pod-network.5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Workload="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" Nov 5 15:51:47.323442 containerd[1598]: 2025-11-05 15:51:47.270 [INFO][4036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwh4z" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", 
UID:"d2415f61-2034-4efd-a40a-a585aaa31215", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"", Pod:"coredns-674b8bbfcf-fwh4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2b59eed0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:47.323442 containerd[1598]: 2025-11-05 15:51:47.270 [INFO][4036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.3/32] ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwh4z" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" Nov 5 15:51:47.323442 containerd[1598]: 2025-11-05 15:51:47.270 [INFO][4036] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to calia2b59eed0b7 ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwh4z" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" Nov 5 15:51:47.323442 containerd[1598]: 2025-11-05 15:51:47.276 [INFO][4036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwh4z" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" Nov 5 15:51:47.323442 containerd[1598]: 2025-11-05 15:51:47.277 [INFO][4036] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwh4z" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d2415f61-2034-4efd-a40a-a585aaa31215", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", 
ContainerID:"5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26", Pod:"coredns-674b8bbfcf-fwh4z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia2b59eed0b7", MAC:"42:76:be:f1:80:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:47.323442 containerd[1598]: 2025-11-05 15:51:47.313 [INFO][4036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" Namespace="kube-system" Pod="coredns-674b8bbfcf-fwh4z" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--fwh4z-eth0" Nov 5 15:51:47.370681 containerd[1598]: time="2025-11-05T15:51:47.370373439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-84dz5,Uid:d0421556-4619-489b-96b6-556923804205,Namespace:calico-system,Attempt:0,} returns sandbox id \"5fe6e0dc12bf96bee934b9b4732132e371f316ad3b82608e3b36e45bc3e7e314\"" Nov 5 15:51:47.392931 containerd[1598]: time="2025-11-05T15:51:47.392864342Z" level=info msg="connecting to shim 5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26" address="unix:///run/containerd/s/dff607cf4cf92c5208594e882dbba90c7724d60be3e055a17598b3a639122167" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:47.451350 systemd-networkd[1496]: cali56e02b74f63: Link UP Nov 5 
15:51:47.456770 systemd-networkd[1496]: cali56e02b74f63: Gained carrier Nov 5 15:51:47.476016 systemd[1]: Started cri-containerd-5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26.scope - libcontainer container 5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26. Nov 5 15:51:47.488869 systemd-networkd[1496]: calia91ef472ce7: Gained IPv6LL Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.040 [INFO][4035] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.071 [INFO][4035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0 calico-kube-controllers-59557cc4f4- calico-system 1e405a49-8153-4577-b190-3b34d7fc5814 861 0 2025-11-05 15:51:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59557cc4f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487.0.1-6-a291033793 calico-kube-controllers-59557cc4f4-hjzvn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali56e02b74f63 [] [] }} ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Namespace="calico-system" Pod="calico-kube-controllers-59557cc4f4-hjzvn" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.071 [INFO][4035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Namespace="calico-system" Pod="calico-kube-controllers-59557cc4f4-hjzvn" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" Nov 5 
15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.198 [INFO][4069] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" HandleID="k8s-pod-network.8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Workload="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.200 [INFO][4069] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" HandleID="k8s-pod-network.8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Workload="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5670), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-6-a291033793", "pod":"calico-kube-controllers-59557cc4f4-hjzvn", "timestamp":"2025-11-05 15:51:47.198911398 +0000 UTC"}, Hostname:"ci-4487.0.1-6-a291033793", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.200 [INFO][4069] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.260 [INFO][4069] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.262 [INFO][4069] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-6-a291033793' Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.318 [INFO][4069] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.338 [INFO][4069] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.358 [INFO][4069] ipam/ipam.go 511: Trying affinity for 192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.365 [INFO][4069] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.375 [INFO][4069] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.375 [INFO][4069] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.386 [INFO][4069] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155 Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.400 [INFO][4069] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.421 [INFO][4069] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.101.4/26] block=192.168.101.0/26 handle="k8s-pod-network.8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.421 [INFO][4069] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.4/26] handle="k8s-pod-network.8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.421 [INFO][4069] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:51:47.530375 containerd[1598]: 2025-11-05 15:51:47.421 [INFO][4069] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.4/26] IPv6=[] ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" HandleID="k8s-pod-network.8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Workload="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" Nov 5 15:51:47.532331 containerd[1598]: 2025-11-05 15:51:47.437 [INFO][4035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Namespace="calico-system" Pod="calico-kube-controllers-59557cc4f4-hjzvn" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0", GenerateName:"calico-kube-controllers-59557cc4f4-", Namespace:"calico-system", SelfLink:"", UID:"1e405a49-8153-4577-b190-3b34d7fc5814", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"59557cc4f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"", Pod:"calico-kube-controllers-59557cc4f4-hjzvn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56e02b74f63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:47.532331 containerd[1598]: 2025-11-05 15:51:47.438 [INFO][4035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.4/32] ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Namespace="calico-system" Pod="calico-kube-controllers-59557cc4f4-hjzvn" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" Nov 5 15:51:47.532331 containerd[1598]: 2025-11-05 15:51:47.439 [INFO][4035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56e02b74f63 ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Namespace="calico-system" Pod="calico-kube-controllers-59557cc4f4-hjzvn" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" Nov 5 15:51:47.532331 containerd[1598]: 2025-11-05 15:51:47.457 [INFO][4035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Namespace="calico-system" 
Pod="calico-kube-controllers-59557cc4f4-hjzvn" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" Nov 5 15:51:47.532331 containerd[1598]: 2025-11-05 15:51:47.460 [INFO][4035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Namespace="calico-system" Pod="calico-kube-controllers-59557cc4f4-hjzvn" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0", GenerateName:"calico-kube-controllers-59557cc4f4-", Namespace:"calico-system", SelfLink:"", UID:"1e405a49-8153-4577-b190-3b34d7fc5814", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59557cc4f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155", Pod:"calico-kube-controllers-59557cc4f4-hjzvn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.101.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56e02b74f63", MAC:"16:22:98:14:f5:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:47.532331 containerd[1598]: 2025-11-05 15:51:47.495 [INFO][4035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" Namespace="calico-system" Pod="calico-kube-controllers-59557cc4f4-hjzvn" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--kube--controllers--59557cc4f4--hjzvn-eth0" Nov 5 15:51:47.568990 containerd[1598]: time="2025-11-05T15:51:47.568901857Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:47.573031 containerd[1598]: time="2025-11-05T15:51:47.571738755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:51:47.573031 containerd[1598]: time="2025-11-05T15:51:47.571863138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:51:47.573296 kubelet[2769]: E1105 15:51:47.572696 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:51:47.573296 kubelet[2769]: E1105 15:51:47.572786 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:51:47.574975 containerd[1598]: time="2025-11-05T15:51:47.574926397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:51:47.593076 kubelet[2769]: E1105 15:51:47.591430 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:65c13a29b5444a168af617f3adffab47,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2csh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b9b594f7-vkfhk_calico-system(03cfcdc6-c1a2-47b4-849d-d54b33232d4e): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:47.644168 containerd[1598]: time="2025-11-05T15:51:47.644005184Z" level=info msg="connecting to shim 8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155" address="unix:///run/containerd/s/a6fc274247b01db03093e0fecd156c4d8eb19acf0824440a47cebe3282fee81d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:47.739902 systemd[1]: Started cri-containerd-8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155.scope - libcontainer container 8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155. Nov 5 15:51:47.754068 containerd[1598]: time="2025-11-05T15:51:47.753515613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fwh4z,Uid:d2415f61-2034-4efd-a40a-a585aaa31215,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26\"" Nov 5 15:51:47.756489 kubelet[2769]: E1105 15:51:47.756450 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:47.773602 containerd[1598]: time="2025-11-05T15:51:47.772831269Z" level=info msg="CreateContainer within sandbox \"5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:51:47.802820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181201469.mount: Deactivated successfully. 
Nov 5 15:51:47.818178 containerd[1598]: time="2025-11-05T15:51:47.818107873Z" level=info msg="Container e55c3612e907ab1a540e17d22d0cf1d0b6303c0746781513670d4ceb5aca1088: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:51:47.824323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012503185.mount: Deactivated successfully. Nov 5 15:51:47.833514 containerd[1598]: time="2025-11-05T15:51:47.833465115Z" level=info msg="CreateContainer within sandbox \"5c942dadd764fbb15a27fd07dd3a365d2f30ca648a83ae0beb3e5f19d255bb26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e55c3612e907ab1a540e17d22d0cf1d0b6303c0746781513670d4ceb5aca1088\"" Nov 5 15:51:47.835095 containerd[1598]: time="2025-11-05T15:51:47.834677651Z" level=info msg="StartContainer for \"e55c3612e907ab1a540e17d22d0cf1d0b6303c0746781513670d4ceb5aca1088\"" Nov 5 15:51:47.837141 containerd[1598]: time="2025-11-05T15:51:47.837082684Z" level=info msg="connecting to shim e55c3612e907ab1a540e17d22d0cf1d0b6303c0746781513670d4ceb5aca1088" address="unix:///run/containerd/s/dff607cf4cf92c5208594e882dbba90c7724d60be3e055a17598b3a639122167" protocol=ttrpc version=3 Nov 5 15:51:47.882777 systemd[1]: Started cri-containerd-e55c3612e907ab1a540e17d22d0cf1d0b6303c0746781513670d4ceb5aca1088.scope - libcontainer container e55c3612e907ab1a540e17d22d0cf1d0b6303c0746781513670d4ceb5aca1088. 
Nov 5 15:51:47.896887 containerd[1598]: time="2025-11-05T15:51:47.896099287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69cd9bb6f5-8xkrl,Uid:165fdd14-70f6-41d7-a608-5c88252d2d07,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:51:47.976344 containerd[1598]: time="2025-11-05T15:51:47.976263500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:47.979472 containerd[1598]: time="2025-11-05T15:51:47.978750568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:51:47.979472 containerd[1598]: time="2025-11-05T15:51:47.979035450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:47.979764 kubelet[2769]: E1105 15:51:47.979550 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:51:47.979764 kubelet[2769]: E1105 15:51:47.979600 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:51:47.980316 kubelet[2769]: E1105 15:51:47.980053 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gmmlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-84dz5_calico-system(d0421556-4619-489b-96b6-556923804205): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:47.981993 containerd[1598]: time="2025-11-05T15:51:47.980906346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:51:47.982310 kubelet[2769]: E1105 15:51:47.981595 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-84dz5" podUID="d0421556-4619-489b-96b6-556923804205" Nov 5 15:51:48.006051 containerd[1598]: time="2025-11-05T15:51:48.005751821Z" level=info msg="StartContainer for 
\"e55c3612e907ab1a540e17d22d0cf1d0b6303c0746781513670d4ceb5aca1088\" returns successfully" Nov 5 15:51:48.222696 kubelet[2769]: E1105 15:51:48.222318 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:48.227520 kubelet[2769]: E1105 15:51:48.227470 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-84dz5" podUID="d0421556-4619-489b-96b6-556923804205" Nov 5 15:51:48.256239 systemd-networkd[1496]: cali3ca54011fac: Link UP Nov 5 15:51:48.256864 systemd-networkd[1496]: cali3ca54011fac: Gained carrier Nov 5 15:51:48.329902 kubelet[2769]: I1105 15:51:48.329698 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fwh4z" podStartSLOduration=42.329499486 podStartE2EDuration="42.329499486s" podCreationTimestamp="2025-11-05 15:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:51:48.321412142 +0000 UTC m=+48.604718747" watchObservedRunningTime="2025-11-05 15:51:48.329499486 +0000 UTC m=+48.612806084" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.008 [INFO][4223] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.029 [INFO][4223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0 calico-apiserver-69cd9bb6f5- calico-apiserver 165fdd14-70f6-41d7-a608-5c88252d2d07 856 0 2025-11-05 15:51:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69cd9bb6f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-6-a291033793 calico-apiserver-69cd9bb6f5-8xkrl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3ca54011fac [] [] }} ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-8xkrl" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.030 [INFO][4223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-8xkrl" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.114 [INFO][4246] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" HandleID="k8s-pod-network.88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Workload="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.117 [INFO][4246] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" HandleID="k8s-pod-network.88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" 
Workload="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f8e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-6-a291033793", "pod":"calico-apiserver-69cd9bb6f5-8xkrl", "timestamp":"2025-11-05 15:51:48.114676464 +0000 UTC"}, Hostname:"ci-4487.0.1-6-a291033793", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.117 [INFO][4246] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.117 [INFO][4246] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.117 [INFO][4246] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-6-a291033793' Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.146 [INFO][4246] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.161 [INFO][4246] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-6-a291033793" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.170 [INFO][4246] ipam/ipam.go 511: Trying affinity for 192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.176 [INFO][4246] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.187 [INFO][4246] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 
15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.189 [INFO][4246] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.192 [INFO][4246] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.209 [INFO][4246] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.235 [INFO][4246] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.101.5/26] block=192.168.101.0/26 handle="k8s-pod-network.88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.235 [INFO][4246] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.5/26] handle="k8s-pod-network.88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.236 [INFO][4246] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:51:48.338608 containerd[1598]: 2025-11-05 15:51:48.236 [INFO][4246] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.5/26] IPv6=[] ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" HandleID="k8s-pod-network.88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Workload="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" Nov 5 15:51:48.340684 containerd[1598]: 2025-11-05 15:51:48.243 [INFO][4223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-8xkrl" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0", GenerateName:"calico-apiserver-69cd9bb6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"165fdd14-70f6-41d7-a608-5c88252d2d07", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69cd9bb6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"", Pod:"calico-apiserver-69cd9bb6f5-8xkrl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.101.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ca54011fac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:48.340684 containerd[1598]: 2025-11-05 15:51:48.243 [INFO][4223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.5/32] ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-8xkrl" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" Nov 5 15:51:48.340684 containerd[1598]: 2025-11-05 15:51:48.243 [INFO][4223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ca54011fac ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-8xkrl" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" Nov 5 15:51:48.340684 containerd[1598]: 2025-11-05 15:51:48.256 [INFO][4223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-8xkrl" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" Nov 5 15:51:48.340684 containerd[1598]: 2025-11-05 15:51:48.258 [INFO][4223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-8xkrl" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0", GenerateName:"calico-apiserver-69cd9bb6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"165fdd14-70f6-41d7-a608-5c88252d2d07", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69cd9bb6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e", Pod:"calico-apiserver-69cd9bb6f5-8xkrl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ca54011fac", MAC:"3a:2d:dd:35:82:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:48.340684 containerd[1598]: 2025-11-05 15:51:48.319 [INFO][4223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-8xkrl" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--8xkrl-eth0" Nov 5 15:51:48.343417 containerd[1598]: time="2025-11-05T15:51:48.343283481Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59557cc4f4-hjzvn,Uid:1e405a49-8153-4577-b190-3b34d7fc5814,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c6a554a9ad36166c8d739412b4c51cc804bf417b73e4f0c89266d657649b155\"" Nov 5 15:51:48.348671 containerd[1598]: time="2025-11-05T15:51:48.347703643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:48.350369 containerd[1598]: time="2025-11-05T15:51:48.350258232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:51:48.350948 containerd[1598]: time="2025-11-05T15:51:48.350860117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:51:48.352979 kubelet[2769]: E1105 15:51:48.352919 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:51:48.352979 kubelet[2769]: E1105 15:51:48.352976 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:51:48.353495 kubelet[2769]: E1105 15:51:48.353434 2769 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2csh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b9b594f7-vkfhk_calico-system(03cfcdc6-c1a2-47b4-849d-d54b33232d4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:48.354736 kubelet[2769]: E1105 15:51:48.354556 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b9b594f7-vkfhk" podUID="03cfcdc6-c1a2-47b4-849d-d54b33232d4e" Nov 5 15:51:48.355272 containerd[1598]: time="2025-11-05T15:51:48.355051377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:51:48.379843 systemd-networkd[1496]: calia2f362b42fd: Gained IPv6LL Nov 5 15:51:48.405654 containerd[1598]: time="2025-11-05T15:51:48.405535805Z" level=info msg="connecting to shim 88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e" address="unix:///run/containerd/s/7fb35853803c9c3f3b2fbbee3f1b49a1140698de7e000f7eb26a85df9108f319" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:48.456999 systemd[1]: Started cri-containerd-88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e.scope - libcontainer container 88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e. 
Nov 5 15:51:48.698607 containerd[1598]: time="2025-11-05T15:51:48.698563725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:48.701774 containerd[1598]: time="2025-11-05T15:51:48.701697935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:51:48.701975 containerd[1598]: time="2025-11-05T15:51:48.701746208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:51:48.704369 kubelet[2769]: E1105 15:51:48.704069 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:51:48.704369 kubelet[2769]: E1105 15:51:48.704137 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:51:48.708918 kubelet[2769]: E1105 15:51:48.708623 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8hsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59557cc4f4-hjzvn_calico-system(1e405a49-8153-4577-b190-3b34d7fc5814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:48.710911 kubelet[2769]: E1105 15:51:48.710829 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" podUID="1e405a49-8153-4577-b190-3b34d7fc5814" Nov 5 15:51:48.716991 containerd[1598]: time="2025-11-05T15:51:48.716461220Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-69cd9bb6f5-8xkrl,Uid:165fdd14-70f6-41d7-a608-5c88252d2d07,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"88fe55aeece702980a7e3b52b7217707a9cd3cdf6ef863af8c54b432f48fc47e\"" Nov 5 15:51:48.722924 containerd[1598]: time="2025-11-05T15:51:48.722860154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:51:48.828941 systemd-networkd[1496]: cali56e02b74f63: Gained IPv6LL Nov 5 15:51:48.885002 containerd[1598]: time="2025-11-05T15:51:48.884954184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-69q97,Uid:be0a8e42-97b5-40e7-95d6-3baf83ea6dbb,Namespace:calico-system,Attempt:0,}" Nov 5 15:51:49.077580 systemd-networkd[1496]: cali0e2f9cc071f: Link UP Nov 5 15:51:49.079263 systemd-networkd[1496]: cali0e2f9cc071f: Gained carrier Nov 5 15:51:49.083785 systemd-networkd[1496]: calia2b59eed0b7: Gained IPv6LL Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:48.951 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0 csi-node-driver- calico-system be0a8e42-97b5-40e7-95d6-3baf83ea6dbb 736 0 2025-11-05 15:51:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487.0.1-6-a291033793 csi-node-driver-69q97 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0e2f9cc071f [] [] }} ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Namespace="calico-system" Pod="csi-node-driver-69q97" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:48.952 
[INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Namespace="calico-system" Pod="csi-node-driver-69q97" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.001 [INFO][4344] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" HandleID="k8s-pod-network.7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Workload="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.002 [INFO][4344] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" HandleID="k8s-pod-network.7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Workload="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003498c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487.0.1-6-a291033793", "pod":"csi-node-driver-69q97", "timestamp":"2025-11-05 15:51:49.001179112 +0000 UTC"}, Hostname:"ci-4487.0.1-6-a291033793", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.002 [INFO][4344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.002 [INFO][4344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.002 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-6-a291033793' Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.014 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.026 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-6-a291033793" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.039 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.044 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.047 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.047 [INFO][4344] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.049 [INFO][4344] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1 Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.056 [INFO][4344] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.064 [INFO][4344] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.101.6/26] block=192.168.101.0/26 handle="k8s-pod-network.7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.064 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.6/26] handle="k8s-pod-network.7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.064 [INFO][4344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:51:49.112743 containerd[1598]: 2025-11-05 15:51:49.064 [INFO][4344] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.6/26] IPv6=[] ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" HandleID="k8s-pod-network.7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Workload="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" Nov 5 15:51:49.115072 containerd[1598]: 2025-11-05 15:51:49.068 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Namespace="calico-system" Pod="csi-node-driver-69q97" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0a8e42-97b5-40e7-95d6-3baf83ea6dbb", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"", Pod:"csi-node-driver-69q97", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e2f9cc071f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:49.115072 containerd[1598]: 2025-11-05 15:51:49.068 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.6/32] ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Namespace="calico-system" Pod="csi-node-driver-69q97" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" Nov 5 15:51:49.115072 containerd[1598]: 2025-11-05 15:51:49.068 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e2f9cc071f ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Namespace="calico-system" Pod="csi-node-driver-69q97" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" Nov 5 15:51:49.115072 containerd[1598]: 2025-11-05 15:51:49.080 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Namespace="calico-system" Pod="csi-node-driver-69q97" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" Nov 5 15:51:49.115072 containerd[1598]: 2025-11-05 15:51:49.082 
[INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Namespace="calico-system" Pod="csi-node-driver-69q97" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"be0a8e42-97b5-40e7-95d6-3baf83ea6dbb", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1", Pod:"csi-node-driver-69q97", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e2f9cc071f", MAC:"22:69:a1:ec:5f:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:49.115072 containerd[1598]: 2025-11-05 15:51:49.098 [INFO][4332] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" Namespace="calico-system" Pod="csi-node-driver-69q97" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-csi--node--driver--69q97-eth0" Nov 5 15:51:49.153031 containerd[1598]: time="2025-11-05T15:51:49.152846701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:49.154566 containerd[1598]: time="2025-11-05T15:51:49.154510901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:49.155266 containerd[1598]: time="2025-11-05T15:51:49.154962770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:51:49.156358 kubelet[2769]: E1105 15:51:49.155830 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:49.156358 kubelet[2769]: E1105 15:51:49.155904 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:49.156358 kubelet[2769]: E1105 15:51:49.156125 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt4mx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69cd9bb6f5-8xkrl_calico-apiserver(165fdd14-70f6-41d7-a608-5c88252d2d07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:49.158250 kubelet[2769]: E1105 15:51:49.158147 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" podUID="165fdd14-70f6-41d7-a608-5c88252d2d07" Nov 5 15:51:49.160295 containerd[1598]: time="2025-11-05T15:51:49.160224248Z" level=info msg="connecting to shim 7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1" 
address="unix:///run/containerd/s/4a4454f3467975d6b568e52f8fdd490e11082e8b234aa26e048c964e7eb6fb30" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:49.211134 systemd[1]: Started cri-containerd-7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1.scope - libcontainer container 7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1. Nov 5 15:51:49.227268 systemd-networkd[1496]: vxlan.calico: Link UP Nov 5 15:51:49.227279 systemd-networkd[1496]: vxlan.calico: Gained carrier Nov 5 15:51:49.240129 kubelet[2769]: E1105 15:51:49.240006 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" podUID="1e405a49-8153-4577-b190-3b34d7fc5814" Nov 5 15:51:49.244154 kubelet[2769]: E1105 15:51:49.244113 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:49.248019 kubelet[2769]: E1105 15:51:49.247906 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-84dz5" 
podUID="d0421556-4619-489b-96b6-556923804205" Nov 5 15:51:49.250834 kubelet[2769]: E1105 15:51:49.248419 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" podUID="165fdd14-70f6-41d7-a608-5c88252d2d07" Nov 5 15:51:49.252035 kubelet[2769]: E1105 15:51:49.251957 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b9b594f7-vkfhk" podUID="03cfcdc6-c1a2-47b4-849d-d54b33232d4e" Nov 5 15:51:49.370789 containerd[1598]: time="2025-11-05T15:51:49.370567149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-69q97,Uid:be0a8e42-97b5-40e7-95d6-3baf83ea6dbb,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"7c80171a16ef24f79205d254840596d707cee9f2940ecb8a633786d61cddf4e1\"" Nov 5 15:51:49.374829 containerd[1598]: time="2025-11-05T15:51:49.374783907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:51:49.809541 containerd[1598]: time="2025-11-05T15:51:49.809495357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:49.811311 containerd[1598]: time="2025-11-05T15:51:49.811106672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:51:49.813101 containerd[1598]: time="2025-11-05T15:51:49.813069721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:51:49.814790 kubelet[2769]: E1105 15:51:49.814740 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:51:49.814889 kubelet[2769]: E1105 15:51:49.814822 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:51:49.815348 kubelet[2769]: E1105 15:51:49.815279 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn99s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-69q97_calico-system(be0a8e42-97b5-40e7-95d6-3baf83ea6dbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:49.819500 containerd[1598]: time="2025-11-05T15:51:49.818923064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:51:49.853018 systemd-networkd[1496]: cali3ca54011fac: Gained IPv6LL Nov 5 15:51:49.884978 kubelet[2769]: E1105 15:51:49.884935 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:49.886322 containerd[1598]: time="2025-11-05T15:51:49.886281364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2gdt,Uid:899384b2-cd0c-4539-b89b-fa912eceabb8,Namespace:kube-system,Attempt:0,}" Nov 5 15:51:50.064910 systemd-networkd[1496]: calia42abe78f7c: Link UP Nov 5 15:51:50.067794 systemd-networkd[1496]: calia42abe78f7c: Gained carrier Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:49.948 [INFO][4478] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0 coredns-674b8bbfcf- kube-system 899384b2-cd0c-4539-b89b-fa912eceabb8 853 0 2025-11-05 15:51:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487.0.1-6-a291033793 coredns-674b8bbfcf-t2gdt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia42abe78f7c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2gdt" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:49.949 [INFO][4478] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2gdt" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.002 [INFO][4492] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" HandleID="k8s-pod-network.72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Workload="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.002 [INFO][4492] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" HandleID="k8s-pod-network.72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Workload="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487.0.1-6-a291033793", "pod":"coredns-674b8bbfcf-t2gdt", "timestamp":"2025-11-05 15:51:50.002070321 +0000 UTC"}, Hostname:"ci-4487.0.1-6-a291033793", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.002 [INFO][4492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.002 [INFO][4492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.002 [INFO][4492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-6-a291033793' Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.012 [INFO][4492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.018 [INFO][4492] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-6-a291033793" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.024 [INFO][4492] ipam/ipam.go 511: Trying affinity for 192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.027 [INFO][4492] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.031 [INFO][4492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.031 [INFO][4492] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.034 [INFO][4492] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.041 [INFO][4492] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.051 [INFO][4492] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.101.7/26] block=192.168.101.0/26 handle="k8s-pod-network.72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.051 [INFO][4492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.7/26] handle="k8s-pod-network.72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.052 [INFO][4492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:51:50.090830 containerd[1598]: 2025-11-05 15:51:50.052 [INFO][4492] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.7/26] IPv6=[] ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" HandleID="k8s-pod-network.72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Workload="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" Nov 5 15:51:50.092433 containerd[1598]: 2025-11-05 15:51:50.056 [INFO][4478] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2gdt" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"899384b2-cd0c-4539-b89b-fa912eceabb8", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"", Pod:"coredns-674b8bbfcf-t2gdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia42abe78f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:50.092433 containerd[1598]: 2025-11-05 15:51:50.056 [INFO][4478] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.7/32] ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2gdt" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" Nov 5 15:51:50.092433 containerd[1598]: 2025-11-05 15:51:50.056 [INFO][4478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia42abe78f7c ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2gdt" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" Nov 5 15:51:50.092433 containerd[1598]: 2025-11-05 15:51:50.067 [INFO][4478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2gdt" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" Nov 5 15:51:50.092433 containerd[1598]: 2025-11-05 15:51:50.068 [INFO][4478] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2gdt" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"899384b2-cd0c-4539-b89b-fa912eceabb8", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae", Pod:"coredns-674b8bbfcf-t2gdt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.101.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia42abe78f7c", MAC:"7e:ec:fc:7d:b9:25", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:50.092433 containerd[1598]: 2025-11-05 15:51:50.083 [INFO][4478] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" Namespace="kube-system" Pod="coredns-674b8bbfcf-t2gdt" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-coredns--674b8bbfcf--t2gdt-eth0" Nov 5 15:51:50.126670 containerd[1598]: time="2025-11-05T15:51:50.125974685Z" level=info msg="connecting to shim 72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae" address="unix:///run/containerd/s/f64052087dc03abd44d087f96f884e03cef478574e8c37ac29bf0601e1d69f8d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:50.162934 systemd[1]: Started cri-containerd-72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae.scope - libcontainer container 72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae. 
Nov 5 15:51:50.241427 containerd[1598]: time="2025-11-05T15:51:50.241258136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t2gdt,Uid:899384b2-cd0c-4539-b89b-fa912eceabb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae\"" Nov 5 15:51:50.243711 kubelet[2769]: E1105 15:51:50.243383 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:50.250229 containerd[1598]: time="2025-11-05T15:51:50.250169556Z" level=info msg="CreateContainer within sandbox \"72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:51:50.267000 kubelet[2769]: E1105 15:51:50.266843 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:50.272103 kubelet[2769]: E1105 15:51:50.272034 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" podUID="165fdd14-70f6-41d7-a608-5c88252d2d07" Nov 5 15:51:50.280075 kubelet[2769]: E1105 15:51:50.279982 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" podUID="1e405a49-8153-4577-b190-3b34d7fc5814" Nov 5 15:51:50.284995 containerd[1598]: time="2025-11-05T15:51:50.284515085Z" level=info msg="Container bbb5e17c0e5e83e3ca7b0d7b28c70cb5386eb495de85c842472e5c04be9fc97d: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:51:50.297322 containerd[1598]: time="2025-11-05T15:51:50.297271228Z" level=info msg="CreateContainer within sandbox \"72b7fef2a78eba0bf321cb1b41cb49ccc171cde7df92a1a9a33806b01daa33ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbb5e17c0e5e83e3ca7b0d7b28c70cb5386eb495de85c842472e5c04be9fc97d\"" Nov 5 15:51:50.298513 containerd[1598]: time="2025-11-05T15:51:50.298437514Z" level=info msg="StartContainer for \"bbb5e17c0e5e83e3ca7b0d7b28c70cb5386eb495de85c842472e5c04be9fc97d\"" Nov 5 15:51:50.303014 containerd[1598]: time="2025-11-05T15:51:50.302936185Z" level=info msg="connecting to shim bbb5e17c0e5e83e3ca7b0d7b28c70cb5386eb495de85c842472e5c04be9fc97d" address="unix:///run/containerd/s/f64052087dc03abd44d087f96f884e03cef478574e8c37ac29bf0601e1d69f8d" protocol=ttrpc version=3 Nov 5 15:51:50.312481 containerd[1598]: time="2025-11-05T15:51:50.312388501Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:50.315690 containerd[1598]: time="2025-11-05T15:51:50.315471685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:51:50.319083 containerd[1598]: time="2025-11-05T15:51:50.317705772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:51:50.320380 kubelet[2769]: E1105 15:51:50.320240 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:51:50.320380 kubelet[2769]: E1105 15:51:50.320308 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:51:50.321881 kubelet[2769]: E1105 15:51:50.321732 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn99s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-69q97_calico-system(be0a8e42-97b5-40e7-95d6-3baf83ea6dbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:50.323815 kubelet[2769]: E1105 15:51:50.323735 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb" Nov 5 15:51:50.366902 systemd[1]: Started cri-containerd-bbb5e17c0e5e83e3ca7b0d7b28c70cb5386eb495de85c842472e5c04be9fc97d.scope - libcontainer container bbb5e17c0e5e83e3ca7b0d7b28c70cb5386eb495de85c842472e5c04be9fc97d. 
Nov 5 15:51:50.446812 containerd[1598]: time="2025-11-05T15:51:50.446768558Z" level=info msg="StartContainer for \"bbb5e17c0e5e83e3ca7b0d7b28c70cb5386eb495de85c842472e5c04be9fc97d\" returns successfully" Nov 5 15:51:50.884115 containerd[1598]: time="2025-11-05T15:51:50.884039198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69cd9bb6f5-kptv5,Uid:2171a61d-ffd4-4f1c-8106-ddf8826eef75,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:51:50.940789 systemd-networkd[1496]: cali0e2f9cc071f: Gained IPv6LL Nov 5 15:51:51.048753 systemd-networkd[1496]: cali56f35d0ff25: Link UP Nov 5 15:51:51.049945 systemd-networkd[1496]: cali56f35d0ff25: Gained carrier Nov 5 15:51:51.071764 systemd-networkd[1496]: vxlan.calico: Gained IPv6LL Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:50.952 [INFO][4584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0 calico-apiserver-69cd9bb6f5- calico-apiserver 2171a61d-ffd4-4f1c-8106-ddf8826eef75 858 0 2025-11-05 15:51:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69cd9bb6f5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487.0.1-6-a291033793 calico-apiserver-69cd9bb6f5-kptv5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56f35d0ff25 [] [] }} ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-kptv5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:50.952 [INFO][4584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-kptv5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:50.994 [INFO][4597] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" HandleID="k8s-pod-network.5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Workload="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:50.994 [INFO][4597] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" HandleID="k8s-pod-network.5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Workload="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487.0.1-6-a291033793", "pod":"calico-apiserver-69cd9bb6f5-kptv5", "timestamp":"2025-11-05 15:51:50.994349546 +0000 UTC"}, Hostname:"ci-4487.0.1-6-a291033793", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:50.994 [INFO][4597] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:50.994 [INFO][4597] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:50.994 [INFO][4597] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487.0.1-6-a291033793' Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.004 [INFO][4597] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.012 [INFO][4597] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487.0.1-6-a291033793" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.019 [INFO][4597] ipam/ipam.go 511: Trying affinity for 192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.021 [INFO][4597] ipam/ipam.go 158: Attempting to load block cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.025 [INFO][4597] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.101.0/26 host="ci-4487.0.1-6-a291033793" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.025 [INFO][4597] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.101.0/26 handle="k8s-pod-network.5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.027 [INFO][4597] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88 Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.033 [INFO][4597] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.101.0/26 handle="k8s-pod-network.5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.040 [INFO][4597] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.101.8/26] block=192.168.101.0/26 handle="k8s-pod-network.5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.040 [INFO][4597] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.101.8/26] handle="k8s-pod-network.5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" host="ci-4487.0.1-6-a291033793" Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.041 [INFO][4597] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:51:51.077415 containerd[1598]: 2025-11-05 15:51:51.041 [INFO][4597] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.101.8/26] IPv6=[] ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" HandleID="k8s-pod-network.5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Workload="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" Nov 5 15:51:51.078474 containerd[1598]: 2025-11-05 15:51:51.044 [INFO][4584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-kptv5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0", GenerateName:"calico-apiserver-69cd9bb6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2171a61d-ffd4-4f1c-8106-ddf8826eef75", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"69cd9bb6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"", Pod:"calico-apiserver-69cd9bb6f5-kptv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56f35d0ff25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:51.078474 containerd[1598]: 2025-11-05 15:51:51.045 [INFO][4584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.101.8/32] ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-kptv5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" Nov 5 15:51:51.078474 containerd[1598]: 2025-11-05 15:51:51.045 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56f35d0ff25 ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-kptv5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" Nov 5 15:51:51.078474 containerd[1598]: 2025-11-05 15:51:51.050 [INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-kptv5" 
WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" Nov 5 15:51:51.078474 containerd[1598]: 2025-11-05 15:51:51.051 [INFO][4584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-kptv5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0", GenerateName:"calico-apiserver-69cd9bb6f5-", Namespace:"calico-apiserver", SelfLink:"", UID:"2171a61d-ffd4-4f1c-8106-ddf8826eef75", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69cd9bb6f5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487.0.1-6-a291033793", ContainerID:"5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88", Pod:"calico-apiserver-69cd9bb6f5-kptv5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.101.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56f35d0ff25", MAC:"02:40:c3:ab:d2:53", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:51:51.078474 containerd[1598]: 2025-11-05 15:51:51.065 [INFO][4584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" Namespace="calico-apiserver" Pod="calico-apiserver-69cd9bb6f5-kptv5" WorkloadEndpoint="ci--4487.0.1--6--a291033793-k8s-calico--apiserver--69cd9bb6f5--kptv5-eth0" Nov 5 15:51:51.128828 containerd[1598]: time="2025-11-05T15:51:51.128543800Z" level=info msg="connecting to shim 5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88" address="unix:///run/containerd/s/c80bc159ce4262f898059cea6a5c30716f2f672354445b4c10b0ce64458080fc" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:51:51.180449 systemd[1]: Started cri-containerd-5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88.scope - libcontainer container 5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88. 
Nov 5 15:51:51.280491 kubelet[2769]: E1105 15:51:51.279478 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:51.285513 kubelet[2769]: E1105 15:51:51.285270 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb" Nov 5 15:51:51.298753 containerd[1598]: time="2025-11-05T15:51:51.298584175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69cd9bb6f5-kptv5,Uid:2171a61d-ffd4-4f1c-8106-ddf8826eef75,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5464831a0f77910dd61a2ffb0fd9ab397b80b4631720eefcc6a3622b00d07f88\"" Nov 5 15:51:51.302603 containerd[1598]: time="2025-11-05T15:51:51.302555744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:51:51.326155 kubelet[2769]: I1105 15:51:51.326083 2769 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t2gdt" 
podStartSLOduration=45.326060362 podStartE2EDuration="45.326060362s" podCreationTimestamp="2025-11-05 15:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:51:51.32569459 +0000 UTC m=+51.609001187" watchObservedRunningTime="2025-11-05 15:51:51.326060362 +0000 UTC m=+51.609366951" Nov 5 15:51:51.624743 containerd[1598]: time="2025-11-05T15:51:51.624612031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:51:51.625756 containerd[1598]: time="2025-11-05T15:51:51.625604031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:51:51.625756 containerd[1598]: time="2025-11-05T15:51:51.625701009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:51:51.626206 kubelet[2769]: E1105 15:51:51.626153 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:51.626296 kubelet[2769]: E1105 15:51:51.626220 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:51:51.626703 kubelet[2769]: E1105 15:51:51.626526 
2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lg2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69cd9bb6f5-kptv5_calico-apiserver(2171a61d-ffd4-4f1c-8106-ddf8826eef75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:51:51.628087 kubelet[2769]: E1105 15:51:51.627950 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" podUID="2171a61d-ffd4-4f1c-8106-ddf8826eef75" Nov 5 15:51:52.091887 systemd-networkd[1496]: calia42abe78f7c: Gained IPv6LL Nov 5 15:51:52.285665 kubelet[2769]: E1105 15:51:52.285591 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:52.287921 kubelet[2769]: E1105 15:51:52.287886 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" podUID="2171a61d-ffd4-4f1c-8106-ddf8826eef75" Nov 5 15:51:52.372754 kubelet[2769]: I1105 15:51:52.370116 2769 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:51:52.372754 kubelet[2769]: E1105 15:51:52.371473 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:52.478898 systemd-networkd[1496]: cali56f35d0ff25: Gained IPv6LL Nov 5 15:51:52.582288 containerd[1598]: time="2025-11-05T15:51:52.582212400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac\" id:\"8e37f79654f1d683809011f856a3a09e73b8beb44793db859b34a135ca584bd2\" pid:4674 exited_at:{seconds:1762357912 nanos:574694975}" Nov 5 15:51:52.740248 containerd[1598]: time="2025-11-05T15:51:52.740070890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac\" id:\"46a21491a26c7ca7674ccd80780cf7e4a96faace943fce3d62a6bc214695068d\" pid:4699 exited_at:{seconds:1762357912 nanos:739670221}" Nov 5 15:51:53.287574 kubelet[2769]: E1105 15:51:53.287518 2769 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:53.289010 kubelet[2769]: E1105 15:51:53.288465 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 5 15:51:53.290277 kubelet[2769]: E1105 15:51:53.290209 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" podUID="2171a61d-ffd4-4f1c-8106-ddf8826eef75" Nov 5 15:52:00.886797 containerd[1598]: time="2025-11-05T15:52:00.885922255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:52:01.235110 containerd[1598]: time="2025-11-05T15:52:01.234782576Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:01.236298 containerd[1598]: time="2025-11-05T15:52:01.236121014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:52:01.236298 containerd[1598]: time="2025-11-05T15:52:01.236167211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:52:01.237612 kubelet[2769]: E1105 15:52:01.236719 2769 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:52:01.237612 kubelet[2769]: E1105 15:52:01.236804 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:52:01.237612 kubelet[2769]: E1105 15:52:01.237009 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt4mx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69cd9bb6f5-8xkrl_calico-apiserver(165fdd14-70f6-41d7-a608-5c88252d2d07): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:01.239264 kubelet[2769]: E1105 15:52:01.239181 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" podUID="165fdd14-70f6-41d7-a608-5c88252d2d07" Nov 5 15:52:02.886112 containerd[1598]: time="2025-11-05T15:52:02.885317967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:52:03.220998 containerd[1598]: time="2025-11-05T15:52:03.220626068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:03.222314 containerd[1598]: time="2025-11-05T15:52:03.222174243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:52:03.222314 containerd[1598]: time="2025-11-05T15:52:03.222257381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:52:03.223039 kubelet[2769]: E1105 15:52:03.222779 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:52:03.223039 kubelet[2769]: E1105 15:52:03.222849 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:52:03.224482 kubelet[2769]: E1105 15:52:03.224027 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gmmlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-84dz5_calico-system(d0421556-4619-489b-96b6-556923804205): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:03.225934 containerd[1598]: time="2025-11-05T15:52:03.223672412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:52:03.226090 kubelet[2769]: E1105 15:52:03.225673 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-84dz5" podUID="d0421556-4619-489b-96b6-556923804205" Nov 5 15:52:03.561126 containerd[1598]: time="2025-11-05T15:52:03.560935297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:03.562318 containerd[1598]: time="2025-11-05T15:52:03.562207661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:52:03.562471 containerd[1598]: time="2025-11-05T15:52:03.562245398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:52:03.562985 kubelet[2769]: E1105 15:52:03.562930 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:52:03.563079 kubelet[2769]: E1105 15:52:03.563003 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:52:03.563376 kubelet[2769]: E1105 15:52:03.563327 2769 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8hsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59557cc4f4-hjzvn_calico-system(1e405a49-8153-4577-b190-3b34d7fc5814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:03.563961 containerd[1598]: time="2025-11-05T15:52:03.563885160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:52:03.564813 kubelet[2769]: E1105 15:52:03.564766 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" podUID="1e405a49-8153-4577-b190-3b34d7fc5814" Nov 5 15:52:04.071764 containerd[1598]: 
time="2025-11-05T15:52:04.071696619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:04.073478 containerd[1598]: time="2025-11-05T15:52:04.073375900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:52:04.073478 containerd[1598]: time="2025-11-05T15:52:04.073459931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:52:04.075241 kubelet[2769]: E1105 15:52:04.075118 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:52:04.075322 kubelet[2769]: E1105 15:52:04.075293 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:52:04.076417 kubelet[2769]: E1105 15:52:04.076365 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:65c13a29b5444a168af617f3adffab47,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2csh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b9b594f7-vkfhk_calico-system(03cfcdc6-c1a2-47b4-849d-d54b33232d4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:04.080117 containerd[1598]: time="2025-11-05T15:52:04.080078309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
15:52:04.471811 containerd[1598]: time="2025-11-05T15:52:04.470827010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:04.473781 containerd[1598]: time="2025-11-05T15:52:04.473219953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:52:04.474229 containerd[1598]: time="2025-11-05T15:52:04.473389052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:52:04.474666 kubelet[2769]: E1105 15:52:04.474445 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:52:04.475983 kubelet[2769]: E1105 15:52:04.475714 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:52:04.476448 kubelet[2769]: E1105 15:52:04.476274 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2csh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b9b594f7-vkfhk_calico-system(03cfcdc6-c1a2-47b4-849d-d54b33232d4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:04.478148 kubelet[2769]: E1105 15:52:04.478072 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b9b594f7-vkfhk" podUID="03cfcdc6-c1a2-47b4-849d-d54b33232d4e" Nov 5 15:52:06.788719 systemd[1]: Started sshd@7-143.110.239.237:22-139.178.68.195:57430.service - OpenSSH per-connection server daemon (139.178.68.195:57430). Nov 5 15:52:06.888072 containerd[1598]: time="2025-11-05T15:52:06.887448589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:52:06.974372 sshd[4738]: Accepted publickey for core from 139.178.68.195 port 57430 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:06.976975 sshd-session[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:06.993968 systemd-logind[1574]: New session 8 of user core. Nov 5 15:52:07.003215 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 5 15:52:07.244975 containerd[1598]: time="2025-11-05T15:52:07.244833900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:07.246750 containerd[1598]: time="2025-11-05T15:52:07.246665087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:52:07.246750 containerd[1598]: time="2025-11-05T15:52:07.246671351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:52:07.247918 kubelet[2769]: E1105 15:52:07.247822 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:52:07.247918 kubelet[2769]: E1105 15:52:07.247882 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:52:07.248491 kubelet[2769]: E1105 15:52:07.248045 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn99s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-69q97_calico-system(be0a8e42-97b5-40e7-95d6-3baf83ea6dbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:52:07.250889 containerd[1598]: time="2025-11-05T15:52:07.250771631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 5 15:52:07.599465 containerd[1598]: time="2025-11-05T15:52:07.598742816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:52:07.599811 containerd[1598]: time="2025-11-05T15:52:07.599745939Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 5 15:52:07.600058 containerd[1598]: time="2025-11-05T15:52:07.599783597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 5 15:52:07.601229 kubelet[2769]: E1105 15:52:07.601103 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 15:52:07.603566 kubelet[2769]: E1105 15:52:07.601424 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 15:52:07.605446 kubelet[2769]: E1105 15:52:07.605209 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn99s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-69q97_calico-system(be0a8e42-97b5-40e7-95d6-3baf83ea6dbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:52:07.606905 kubelet[2769]: E1105 15:52:07.606695 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb"
Nov 5 15:52:07.905236 sshd[4743]: Connection closed by 139.178.68.195 port 57430
Nov 5 15:52:07.907356 sshd-session[4738]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:07.917859 systemd[1]: sshd@7-143.110.239.237:22-139.178.68.195:57430.service: Deactivated successfully.
Nov 5 15:52:07.923373 systemd[1]: session-8.scope: Deactivated successfully.
Nov 5 15:52:07.926716 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit.
Nov 5 15:52:07.931761 systemd-logind[1574]: Removed session 8.
Nov 5 15:52:08.886626 containerd[1598]: time="2025-11-05T15:52:08.886573185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:52:09.244279 containerd[1598]: time="2025-11-05T15:52:09.244049915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:52:09.245982 containerd[1598]: time="2025-11-05T15:52:09.245762196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:52:09.246809 containerd[1598]: time="2025-11-05T15:52:09.245815160Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:52:09.246975 kubelet[2769]: E1105 15:52:09.246934 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:52:09.247614 kubelet[2769]: E1105 15:52:09.246989 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:52:09.247614 kubelet[2769]: E1105 15:52:09.247155 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lg2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69cd9bb6f5-kptv5_calico-apiserver(2171a61d-ffd4-4f1c-8106-ddf8826eef75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:52:09.248339 kubelet[2769]: E1105 15:52:09.248302 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" podUID="2171a61d-ffd4-4f1c-8106-ddf8826eef75"
Nov 5 15:52:12.883938 kubelet[2769]: E1105 15:52:12.883831 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:52:12.930462 systemd[1]: Started sshd@8-143.110.239.237:22-139.178.68.195:57438.service - OpenSSH per-connection server daemon (139.178.68.195:57438).
Nov 5 15:52:13.063673 sshd[4766]: Accepted publickey for core from 139.178.68.195 port 57438 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:52:13.065145 sshd-session[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:13.075461 systemd-logind[1574]: New session 9 of user core.
Nov 5 15:52:13.085784 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 5 15:52:13.312805 sshd[4769]: Connection closed by 139.178.68.195 port 57438
Nov 5 15:52:13.313610 sshd-session[4766]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:13.321948 systemd[1]: sshd@8-143.110.239.237:22-139.178.68.195:57438.service: Deactivated successfully.
Nov 5 15:52:13.326246 systemd[1]: session-9.scope: Deactivated successfully.
Nov 5 15:52:13.330319 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit.
Nov 5 15:52:13.334157 systemd-logind[1574]: Removed session 9.
Nov 5 15:52:14.886465 kubelet[2769]: E1105 15:52:14.884820 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-84dz5" podUID="d0421556-4619-489b-96b6-556923804205"
Nov 5 15:52:15.888765 kubelet[2769]: E1105 15:52:15.886892 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" podUID="165fdd14-70f6-41d7-a608-5c88252d2d07"
Nov 5 15:52:17.886052 kubelet[2769]: E1105 15:52:17.885999 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" podUID="1e405a49-8153-4577-b190-3b34d7fc5814"
Nov 5 15:52:18.333784 systemd[1]: Started sshd@9-143.110.239.237:22-139.178.68.195:55250.service - OpenSSH per-connection server daemon (139.178.68.195:55250).
Nov 5 15:52:18.406518 sshd[4783]: Accepted publickey for core from 139.178.68.195 port 55250 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:52:18.410430 sshd-session[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:18.420677 systemd-logind[1574]: New session 10 of user core.
Nov 5 15:52:18.426960 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 5 15:52:18.617001 sshd[4786]: Connection closed by 139.178.68.195 port 55250
Nov 5 15:52:18.619846 sshd-session[4783]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:18.635369 systemd[1]: sshd@9-143.110.239.237:22-139.178.68.195:55250.service: Deactivated successfully.
Nov 5 15:52:18.641539 systemd[1]: session-10.scope: Deactivated successfully.
Nov 5 15:52:18.645072 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit.
Nov 5 15:52:18.651601 systemd-logind[1574]: Removed session 10.
Nov 5 15:52:18.656392 systemd[1]: Started sshd@10-143.110.239.237:22-139.178.68.195:55256.service - OpenSSH per-connection server daemon (139.178.68.195:55256).
Nov 5 15:52:18.775291 sshd[4801]: Accepted publickey for core from 139.178.68.195 port 55256 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:52:18.777576 sshd-session[4801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:18.787780 systemd-logind[1574]: New session 11 of user core.
Nov 5 15:52:18.793018 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 5 15:52:18.884498 kubelet[2769]: E1105 15:52:18.884271 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:52:19.098306 sshd[4804]: Connection closed by 139.178.68.195 port 55256
Nov 5 15:52:19.099362 sshd-session[4801]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:19.126077 systemd[1]: sshd@10-143.110.239.237:22-139.178.68.195:55256.service: Deactivated successfully.
Nov 5 15:52:19.131857 systemd[1]: session-11.scope: Deactivated successfully.
Nov 5 15:52:19.138077 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit.
Nov 5 15:52:19.147168 systemd[1]: Started sshd@11-143.110.239.237:22-139.178.68.195:55260.service - OpenSSH per-connection server daemon (139.178.68.195:55260).
Nov 5 15:52:19.149784 systemd-logind[1574]: Removed session 11.
Nov 5 15:52:19.253672 sshd[4814]: Accepted publickey for core from 139.178.68.195 port 55260 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:52:19.256141 sshd-session[4814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:19.264885 systemd-logind[1574]: New session 12 of user core.
Nov 5 15:52:19.273897 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 5 15:52:19.497168 sshd[4818]: Connection closed by 139.178.68.195 port 55260
Nov 5 15:52:19.497303 sshd-session[4814]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:19.507899 systemd[1]: sshd@11-143.110.239.237:22-139.178.68.195:55260.service: Deactivated successfully.
Nov 5 15:52:19.512780 systemd[1]: session-12.scope: Deactivated successfully.
Nov 5 15:52:19.516751 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit.
Nov 5 15:52:19.519279 systemd-logind[1574]: Removed session 12.
Nov 5 15:52:19.888690 kubelet[2769]: E1105 15:52:19.888624 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b9b594f7-vkfhk" podUID="03cfcdc6-c1a2-47b4-849d-d54b33232d4e"
Nov 5 15:52:20.885165 kubelet[2769]: E1105 15:52:20.884891 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" podUID="2171a61d-ffd4-4f1c-8106-ddf8826eef75"
Nov 5 15:52:21.901841 kubelet[2769]: E1105 15:52:21.901756 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb"
Nov 5 15:52:22.727300 containerd[1598]: time="2025-11-05T15:52:22.727229332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac\" id:\"5e55653809ff2786388854dd60a48d427d8f530b7e091d0b3fd6f37b13177f89\" pid:4843 exited_at:{seconds:1762357942 nanos:726347083}"
Nov 5 15:52:24.516573 systemd[1]: Started sshd@12-143.110.239.237:22-139.178.68.195:46878.service - OpenSSH per-connection server daemon (139.178.68.195:46878).
Nov 5 15:52:24.682970 sshd[4860]: Accepted publickey for core from 139.178.68.195 port 46878 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:52:24.686516 sshd-session[4860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:24.697942 systemd-logind[1574]: New session 13 of user core.
Nov 5 15:52:24.705133 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 5 15:52:24.910864 sshd[4863]: Connection closed by 139.178.68.195 port 46878
Nov 5 15:52:24.912003 sshd-session[4860]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:24.920275 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit.
Nov 5 15:52:24.921372 systemd[1]: sshd@12-143.110.239.237:22-139.178.68.195:46878.service: Deactivated successfully.
Nov 5 15:52:24.925813 systemd[1]: session-13.scope: Deactivated successfully.
Nov 5 15:52:24.930606 systemd-logind[1574]: Removed session 13.
Nov 5 15:52:25.887367 containerd[1598]: time="2025-11-05T15:52:25.887305717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 5 15:52:26.250522 containerd[1598]: time="2025-11-05T15:52:26.250368198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:52:26.251528 containerd[1598]: time="2025-11-05T15:52:26.251466956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 5 15:52:26.251669 containerd[1598]: time="2025-11-05T15:52:26.251559184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:52:26.252083 kubelet[2769]: E1105 15:52:26.252023 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 15:52:26.252083 kubelet[2769]: E1105 15:52:26.252080 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 15:52:26.252944 kubelet[2769]: E1105 15:52:26.252722 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gmmlq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-84dz5_calico-system(d0421556-4619-489b-96b6-556923804205): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:52:26.253940 kubelet[2769]: E1105 15:52:26.253900 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-84dz5" podUID="d0421556-4619-489b-96b6-556923804205"
Nov 5 15:52:26.884481 kubelet[2769]: E1105 15:52:26.884429 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:52:28.887556 containerd[1598]: time="2025-11-05T15:52:28.887502702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:52:29.270607 containerd[1598]: time="2025-11-05T15:52:29.270548084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:52:29.271653 containerd[1598]: time="2025-11-05T15:52:29.271588691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:52:29.271893 containerd[1598]: time="2025-11-05T15:52:29.271814374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:52:29.272373 kubelet[2769]: E1105 15:52:29.272250 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:52:29.272373 kubelet[2769]: E1105 15:52:29.272330 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:52:29.274516 kubelet[2769]: E1105 15:52:29.274178 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lt4mx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69cd9bb6f5-8xkrl_calico-apiserver(165fdd14-70f6-41d7-a608-5c88252d2d07): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:52:29.275884 kubelet[2769]: E1105 15:52:29.275829 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" podUID="165fdd14-70f6-41d7-a608-5c88252d2d07"
Nov 5 15:52:29.886668 kubelet[2769]: E1105 15:52:29.885123 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:52:29.940037 systemd[1]: Started sshd@13-143.110.239.237:22-139.178.68.195:46880.service - OpenSSH per-connection server daemon (139.178.68.195:46880).
Nov 5 15:52:30.128042 sshd[4878]: Accepted publickey for core from 139.178.68.195 port 46880 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:52:30.130278 sshd-session[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:30.139121 systemd-logind[1574]: New session 14 of user core.
Nov 5 15:52:30.146984 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 5 15:52:30.397707 sshd[4884]: Connection closed by 139.178.68.195 port 46880
Nov 5 15:52:30.402716 sshd-session[4878]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:30.415112 systemd[1]: sshd@13-143.110.239.237:22-139.178.68.195:46880.service: Deactivated successfully.
Nov 5 15:52:30.420755 systemd[1]: session-14.scope: Deactivated successfully.
Nov 5 15:52:30.424611 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit.
Nov 5 15:52:30.429546 systemd-logind[1574]: Removed session 14.
Nov 5 15:52:32.889262 containerd[1598]: time="2025-11-05T15:52:32.889132644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 5 15:52:33.242122 containerd[1598]: time="2025-11-05T15:52:33.241964470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:52:33.244544 containerd[1598]: time="2025-11-05T15:52:33.244439806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 5 15:52:33.246859 containerd[1598]: time="2025-11-05T15:52:33.244440807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 5 15:52:33.247590 kubelet[2769]: E1105 15:52:33.247536 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 15:52:33.248005 kubelet[2769]: E1105 15:52:33.247626 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 15:52:33.248865 containerd[1598]: time="2025-11-05T15:52:33.248239687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 5 15:52:33.248959 kubelet[2769]: E1105 15:52:33.248176 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn99s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-69q97_calico-system(be0a8e42-97b5-40e7-95d6-3baf83ea6dbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:52:33.624914 containerd[1598]: time="2025-11-05T15:52:33.624404222Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:52:33.625895 containerd[1598]: time="2025-11-05T15:52:33.625669385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 5 15:52:33.625895 containerd[1598]: time="2025-11-05T15:52:33.625772891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 5 15:52:33.628065 kubelet[2769]: E1105 15:52:33.627859 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 15:52:33.628065 kubelet[2769]: E1105 15:52:33.627918 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 15:52:33.628362 containerd[1598]: time="2025-11-05T15:52:33.628317946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 5 15:52:33.628996 kubelet[2769]: E1105 15:52:33.628680 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8hsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-59557cc4f4-hjzvn_calico-system(1e405a49-8153-4577-b190-3b34d7fc5814): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:33.630150 kubelet[2769]: E1105 15:52:33.630095 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" podUID="1e405a49-8153-4577-b190-3b34d7fc5814" Nov 5 15:52:33.987360 containerd[1598]: time="2025-11-05T15:52:33.986841081Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:33.989402 containerd[1598]: 
time="2025-11-05T15:52:33.989261705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:52:33.989402 containerd[1598]: time="2025-11-05T15:52:33.989321475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:52:33.989860 kubelet[2769]: E1105 15:52:33.989810 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:52:33.991664 kubelet[2769]: E1105 15:52:33.990044 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:52:33.991664 kubelet[2769]: E1105 15:52:33.990263 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn99s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-69q97_calico-system(be0a8e42-97b5-40e7-95d6-3baf83ea6dbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:33.992073 kubelet[2769]: E1105 15:52:33.992021 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb" Nov 5 15:52:34.884888 containerd[1598]: time="2025-11-05T15:52:34.884716237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:52:35.310374 containerd[1598]: time="2025-11-05T15:52:35.309870180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:35.311741 containerd[1598]: time="2025-11-05T15:52:35.311679722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:52:35.312703 containerd[1598]: time="2025-11-05T15:52:35.311851323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:52:35.313059 kubelet[2769]: E1105 15:52:35.312999 2769 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:52:35.313751 kubelet[2769]: E1105 15:52:35.313081 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:52:35.313751 kubelet[2769]: E1105 15:52:35.313251 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:65c13a29b5444a168af617f3adffab47,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2csh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfil
e:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b9b594f7-vkfhk_calico-system(03cfcdc6-c1a2-47b4-849d-d54b33232d4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:35.318052 containerd[1598]: time="2025-11-05T15:52:35.317795454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:52:35.423167 systemd[1]: Started sshd@14-143.110.239.237:22-139.178.68.195:58626.service - OpenSSH per-connection server daemon (139.178.68.195:58626). Nov 5 15:52:35.512289 sshd[4898]: Accepted publickey for core from 139.178.68.195 port 58626 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:35.517025 sshd-session[4898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:35.528093 systemd-logind[1574]: New session 15 of user core. Nov 5 15:52:35.533972 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 5 15:52:35.643275 containerd[1598]: time="2025-11-05T15:52:35.643128729Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:35.644156 containerd[1598]: time="2025-11-05T15:52:35.644091144Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:52:35.644278 containerd[1598]: time="2025-11-05T15:52:35.644223960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:52:35.645449 kubelet[2769]: E1105 15:52:35.645406 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:52:35.645608 kubelet[2769]: E1105 15:52:35.645592 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:52:35.645956 kubelet[2769]: E1105 15:52:35.645903 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2csh7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-67b9b594f7-vkfhk_calico-system(03cfcdc6-c1a2-47b4-849d-d54b33232d4e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:35.647654 kubelet[2769]: E1105 15:52:35.647586 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b9b594f7-vkfhk" podUID="03cfcdc6-c1a2-47b4-849d-d54b33232d4e" Nov 5 15:52:35.750503 sshd[4901]: Connection closed by 139.178.68.195 port 58626 Nov 5 15:52:35.751339 sshd-session[4898]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:35.760211 systemd[1]: sshd@14-143.110.239.237:22-139.178.68.195:58626.service: Deactivated successfully. Nov 5 15:52:35.767482 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:52:35.770293 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:52:35.773604 systemd-logind[1574]: Removed session 15. 
Nov 5 15:52:35.890025 containerd[1598]: time="2025-11-05T15:52:35.889963117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:52:36.203657 containerd[1598]: time="2025-11-05T15:52:36.203581020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:36.204612 containerd[1598]: time="2025-11-05T15:52:36.204503445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:52:36.204851 containerd[1598]: time="2025-11-05T15:52:36.204624574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:52:36.205994 kubelet[2769]: E1105 15:52:36.205881 2769 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:52:36.205994 kubelet[2769]: E1105 15:52:36.205962 2769 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:52:36.206550 kubelet[2769]: E1105 15:52:36.206488 2769 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lg2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-69cd9bb6f5-kptv5_calico-apiserver(2171a61d-ffd4-4f1c-8106-ddf8826eef75): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:36.207867 kubelet[2769]: E1105 15:52:36.207810 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" podUID="2171a61d-ffd4-4f1c-8106-ddf8826eef75" Nov 5 15:52:39.888584 kubelet[2769]: E1105 15:52:39.888164 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-84dz5" podUID="d0421556-4619-489b-96b6-556923804205" Nov 5 15:52:40.771185 systemd[1]: Started sshd@15-143.110.239.237:22-139.178.68.195:58636.service - OpenSSH per-connection server daemon (139.178.68.195:58636). Nov 5 15:52:40.906545 sshd[4916]: Accepted publickey for core from 139.178.68.195 port 58636 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:40.909669 sshd-session[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:40.917695 systemd-logind[1574]: New session 16 of user core. Nov 5 15:52:40.926904 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 5 15:52:41.136759 sshd[4919]: Connection closed by 139.178.68.195 port 58636 Nov 5 15:52:41.135444 sshd-session[4916]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:41.147783 systemd[1]: sshd@15-143.110.239.237:22-139.178.68.195:58636.service: Deactivated successfully. Nov 5 15:52:41.152566 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:52:41.153800 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:52:41.159938 systemd[1]: Started sshd@16-143.110.239.237:22-139.178.68.195:58638.service - OpenSSH per-connection server daemon (139.178.68.195:58638). Nov 5 15:52:41.162710 systemd-logind[1574]: Removed session 16. Nov 5 15:52:41.227296 sshd[4931]: Accepted publickey for core from 139.178.68.195 port 58638 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:41.229268 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:41.241419 systemd-logind[1574]: New session 17 of user core. Nov 5 15:52:41.248957 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:52:41.653684 sshd[4934]: Connection closed by 139.178.68.195 port 58638 Nov 5 15:52:41.657098 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:41.668274 systemd[1]: sshd@16-143.110.239.237:22-139.178.68.195:58638.service: Deactivated successfully. Nov 5 15:52:41.671955 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:52:41.675830 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:52:41.682932 systemd[1]: Started sshd@17-143.110.239.237:22-139.178.68.195:58640.service - OpenSSH per-connection server daemon (139.178.68.195:58640). Nov 5 15:52:41.688792 systemd-logind[1574]: Removed session 17. 
Nov 5 15:52:41.839222 sshd[4944]: Accepted publickey for core from 139.178.68.195 port 58640 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:41.841388 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:41.847868 systemd-logind[1574]: New session 18 of user core. Nov 5 15:52:41.856087 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:52:42.889702 sshd[4949]: Connection closed by 139.178.68.195 port 58640 Nov 5 15:52:42.888926 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:42.909921 systemd[1]: sshd@17-143.110.239.237:22-139.178.68.195:58640.service: Deactivated successfully. Nov 5 15:52:42.917240 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:52:42.921773 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:52:42.927915 systemd-logind[1574]: Removed session 18. Nov 5 15:52:42.932237 systemd[1]: Started sshd@18-143.110.239.237:22-139.178.68.195:58650.service - OpenSSH per-connection server daemon (139.178.68.195:58650). Nov 5 15:52:43.068663 sshd[4965]: Accepted publickey for core from 139.178.68.195 port 58650 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:43.071548 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:43.080335 systemd-logind[1574]: New session 19 of user core. Nov 5 15:52:43.086915 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:52:43.542261 sshd[4969]: Connection closed by 139.178.68.195 port 58650 Nov 5 15:52:43.545073 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:43.556048 systemd[1]: sshd@18-143.110.239.237:22-139.178.68.195:58650.service: Deactivated successfully. Nov 5 15:52:43.558327 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:52:43.562368 systemd-logind[1574]: Session 19 logged out. 
Waiting for processes to exit. Nov 5 15:52:43.565872 systemd[1]: Started sshd@19-143.110.239.237:22-139.178.68.195:41132.service - OpenSSH per-connection server daemon (139.178.68.195:41132). Nov 5 15:52:43.570704 systemd-logind[1574]: Removed session 19. Nov 5 15:52:43.666337 sshd[4979]: Accepted publickey for core from 139.178.68.195 port 41132 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30 Nov 5 15:52:43.668750 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:43.677693 systemd-logind[1574]: New session 20 of user core. Nov 5 15:52:43.681695 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:52:43.883838 sshd[4982]: Connection closed by 139.178.68.195 port 41132 Nov 5 15:52:43.885054 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:43.893360 systemd[1]: sshd@19-143.110.239.237:22-139.178.68.195:41132.service: Deactivated successfully. Nov 5 15:52:43.898091 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:52:43.899809 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:52:43.903231 systemd-logind[1574]: Removed session 20. 
Nov 5 15:52:44.886398 kubelet[2769]: E1105 15:52:44.886327 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" podUID="165fdd14-70f6-41d7-a608-5c88252d2d07"
Nov 5 15:52:46.886323 kubelet[2769]: E1105 15:52:46.885029 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" podUID="1e405a49-8153-4577-b190-3b34d7fc5814"
Nov 5 15:52:46.886323 kubelet[2769]: E1105 15:52:46.885270 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" podUID="2171a61d-ffd4-4f1c-8106-ddf8826eef75"
Nov 5 15:52:48.797292 update_engine[1575]: I20251105 15:52:48.797165 1575 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Nov 5 15:52:48.797292 update_engine[1575]: I20251105 15:52:48.797290 1575 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Nov 5 15:52:48.798894 update_engine[1575]: I20251105 15:52:48.798830 1575 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Nov 5 15:52:48.799301 update_engine[1575]: I20251105 15:52:48.799275 1575 omaha_request_params.cc:62] Current group set to alpha
Nov 5 15:52:48.799464 update_engine[1575]: I20251105 15:52:48.799436 1575 update_attempter.cc:499] Already updated boot flags. Skipping.
Nov 5 15:52:48.799464 update_engine[1575]: I20251105 15:52:48.799458 1575 update_attempter.cc:643] Scheduling an action processor start.
Nov 5 15:52:48.799620 update_engine[1575]: I20251105 15:52:48.799485 1575 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 5 15:52:48.799620 update_engine[1575]: I20251105 15:52:48.799551 1575 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Nov 5 15:52:48.799703 update_engine[1575]: I20251105 15:52:48.799657 1575 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 5 15:52:48.799703 update_engine[1575]: I20251105 15:52:48.799669 1575 omaha_request_action.cc:272] Request:
Nov 5 15:52:48.799703 update_engine[1575]:
Nov 5 15:52:48.799703 update_engine[1575]:
Nov 5 15:52:48.799703 update_engine[1575]:
Nov 5 15:52:48.799703 update_engine[1575]:
Nov 5 15:52:48.799703 update_engine[1575]:
Nov 5 15:52:48.799703 update_engine[1575]:
Nov 5 15:52:48.799703 update_engine[1575]:
Nov 5 15:52:48.799703 update_engine[1575]:
Nov 5 15:52:48.799703 update_engine[1575]: I20251105 15:52:48.799679 1575 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 5 15:52:48.841504 update_engine[1575]: I20251105 15:52:48.835939 1575 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 5 15:52:48.841504 update_engine[1575]: I20251105 15:52:48.836663 1575 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 5 15:52:48.842206 update_engine[1575]: E20251105 15:52:48.841912 1575 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 5 15:52:48.842206 update_engine[1575]: I20251105 15:52:48.842029 1575 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Nov 5 15:52:48.860918 locksmithd[1614]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Nov 5 15:52:48.889031 kubelet[2769]: E1105 15:52:48.888954 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb"
Nov 5 15:52:48.907045 systemd[1]: Started sshd@20-143.110.239.237:22-139.178.68.195:41134.service - OpenSSH per-connection server daemon (139.178.68.195:41134).
Nov 5 15:52:49.050133 sshd[4997]: Accepted publickey for core from 139.178.68.195 port 41134 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:52:49.052860 sshd-session[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:49.065757 systemd-logind[1574]: New session 21 of user core.
Nov 5 15:52:49.071980 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 5 15:52:49.330231 sshd[5000]: Connection closed by 139.178.68.195 port 41134
Nov 5 15:52:49.329291 sshd-session[4997]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:49.337194 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit.
Nov 5 15:52:49.338210 systemd[1]: sshd@20-143.110.239.237:22-139.178.68.195:41134.service: Deactivated successfully.
Nov 5 15:52:49.342508 systemd[1]: session-21.scope: Deactivated successfully.
Nov 5 15:52:49.346879 systemd-logind[1574]: Removed session 21.
Nov 5 15:52:49.891462 kubelet[2769]: E1105 15:52:49.890272 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67b9b594f7-vkfhk" podUID="03cfcdc6-c1a2-47b4-849d-d54b33232d4e"
Nov 5 15:52:52.775120 containerd[1598]: time="2025-11-05T15:52:52.775069081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0560bc3e3b1e93b40fe7ccc95be20f61447b4bc7505856110a6dd8ffcbd0aac\" id:\"4300849e668e3d081f9540b99a255cacd868f661149899668ef11cdb60d63cde\" pid:5023 exited_at:{seconds:1762357972 nanos:774566945}"
Nov 5 15:52:54.354288 systemd[1]: Started sshd@21-143.110.239.237:22-139.178.68.195:37854.service - OpenSSH per-connection server daemon (139.178.68.195:37854).
Nov 5 15:52:54.462780 sshd[5037]: Accepted publickey for core from 139.178.68.195 port 37854 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:52:54.466616 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:54.476445 systemd-logind[1574]: New session 22 of user core.
Nov 5 15:52:54.481949 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 5 15:52:54.701728 sshd[5040]: Connection closed by 139.178.68.195 port 37854
Nov 5 15:52:54.703883 sshd-session[5037]: pam_unix(sshd:session): session closed for user core
Nov 5 15:52:54.713905 systemd[1]: sshd@21-143.110.239.237:22-139.178.68.195:37854.service: Deactivated successfully.
Nov 5 15:52:54.721304 systemd[1]: session-22.scope: Deactivated successfully.
Nov 5 15:52:54.723326 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit.
Nov 5 15:52:54.726860 systemd-logind[1574]: Removed session 22.
Nov 5 15:52:54.886356 kubelet[2769]: E1105 15:52:54.886057 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-84dz5" podUID="d0421556-4619-489b-96b6-556923804205"
Nov 5 15:52:54.889988 kubelet[2769]: E1105 15:52:54.889871 2769 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 5 15:52:57.890868 kubelet[2769]: E1105 15:52:57.890796 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-8xkrl" podUID="165fdd14-70f6-41d7-a608-5c88252d2d07"
Nov 5 15:52:57.891804 kubelet[2769]: E1105 15:52:57.891721 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69cd9bb6f5-kptv5" podUID="2171a61d-ffd4-4f1c-8106-ddf8826eef75"
Nov 5 15:52:58.715817 update_engine[1575]: I20251105 15:52:58.715718 1575 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 5 15:52:58.715817 update_engine[1575]: I20251105 15:52:58.715855 1575 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 5 15:52:58.716606 update_engine[1575]: I20251105 15:52:58.716368 1575 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 5 15:52:58.777471 update_engine[1575]: E20251105 15:52:58.777349 1575 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 5 15:52:58.777689 update_engine[1575]: I20251105 15:52:58.777488 1575 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Nov 5 15:52:59.724047 systemd[1]: Started sshd@22-143.110.239.237:22-139.178.68.195:37864.service - OpenSSH per-connection server daemon (139.178.68.195:37864).
Nov 5 15:52:59.806491 sshd[5052]: Accepted publickey for core from 139.178.68.195 port 37864 ssh2: RSA SHA256:6pZ2eqROk+ALbQ+c/ul+tfC2zt1KpSHiHdkR7HgdI30
Nov 5 15:52:59.810178 sshd-session[5052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:52:59.825448 systemd-logind[1574]: New session 23 of user core.
Nov 5 15:52:59.831402 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 5 15:53:00.097235 sshd[5055]: Connection closed by 139.178.68.195 port 37864
Nov 5 15:53:00.098324 sshd-session[5052]: pam_unix(sshd:session): session closed for user core
Nov 5 15:53:00.107862 systemd[1]: sshd@22-143.110.239.237:22-139.178.68.195:37864.service: Deactivated successfully.
Nov 5 15:53:00.114017 systemd[1]: session-23.scope: Deactivated successfully.
Nov 5 15:53:00.115779 systemd-logind[1574]: Session 23 logged out. Waiting for processes to exit.
Nov 5 15:53:00.118005 systemd-logind[1574]: Removed session 23.
Nov 5 15:53:00.885358 kubelet[2769]: E1105 15:53:00.885307 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-59557cc4f4-hjzvn" podUID="1e405a49-8153-4577-b190-3b34d7fc5814"
Nov 5 15:53:01.888628 kubelet[2769]: E1105 15:53:01.888560 2769 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-69q97" podUID="be0a8e42-97b5-40e7-95d6-3baf83ea6dbb"