Nov 8 00:20:07.042697 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025 Nov 8 00:20:07.042742 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:20:07.042762 kernel: BIOS-provided physical RAM map: Nov 8 00:20:07.042774 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 8 00:20:07.042786 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 8 00:20:07.042798 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 8 00:20:07.042813 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable Nov 8 00:20:07.042825 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved Nov 8 00:20:07.042838 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 8 00:20:07.042855 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 8 00:20:07.042868 kernel: NX (Execute Disable) protection: active Nov 8 00:20:07.042880 kernel: APIC: Static calls initialized Nov 8 00:20:07.042898 kernel: SMBIOS 2.8 present. Nov 8 00:20:07.042911 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Nov 8 00:20:07.042927 kernel: Hypervisor detected: KVM Nov 8 00:20:07.042946 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 8 00:20:07.042963 kernel: kvm-clock: using sched offset of 3693080068 cycles Nov 8 00:20:07.042989 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 8 00:20:07.043001 kernel: tsc: Detected 2494.140 MHz processor Nov 8 00:20:07.043013 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 8 00:20:07.043026 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 8 00:20:07.043039 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000 Nov 8 00:20:07.043054 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 8 00:20:07.043068 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 8 00:20:07.043088 kernel: ACPI: Early table checksum verification disabled Nov 8 00:20:07.043122 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS ) Nov 8 00:20:07.043138 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:20:07.043152 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:20:07.043167 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:20:07.043181 kernel: ACPI: FACS 0x000000007FFE0000 000040 Nov 8 00:20:07.043195 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:20:07.043209 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:20:07.043224 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:20:07.043244 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:20:07.043258 kernel: ACPI: Reserving FACP table memory at [mem 
0x7ffe176a-0x7ffe17dd] Nov 8 00:20:07.043272 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Nov 8 00:20:07.043286 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Nov 8 00:20:07.043300 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Nov 8 00:20:07.043315 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Nov 8 00:20:07.043330 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Nov 8 00:20:07.043355 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Nov 8 00:20:07.043370 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 8 00:20:07.043385 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 8 00:20:07.043401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 8 00:20:07.043415 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 8 00:20:07.043436 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff] Nov 8 00:20:07.043452 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff] Nov 8 00:20:07.043473 kernel: Zone ranges: Nov 8 00:20:07.043488 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:20:07.043503 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff] Nov 8 00:20:07.043519 kernel: Normal empty Nov 8 00:20:07.043534 kernel: Movable zone start for each node Nov 8 00:20:07.043549 kernel: Early memory node ranges Nov 8 00:20:07.043565 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 8 00:20:07.043579 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff] Nov 8 00:20:07.043595 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff] Nov 8 00:20:07.043615 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:20:07.043630 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 8 00:20:07.043649 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges Nov 8 00:20:07.043665 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 8 00:20:07.043680 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 8 00:20:07.043695 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:20:07.043710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 8 00:20:07.043726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 8 00:20:07.043741 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:20:07.043761 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 8 00:20:07.043776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 8 00:20:07.043792 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:20:07.043806 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:20:07.043821 kernel: TSC deadline timer available Nov 8 00:20:07.043836 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 8 00:20:07.043851 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 8 00:20:07.043867 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Nov 8 00:20:07.043886 kernel: Booting paravirtualized kernel on KVM Nov 8 00:20:07.043902 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:20:07.043922 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 8 00:20:07.043938 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 8 00:20:07.043953 kernel: 
pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 8 00:20:07.043967 kernel: pcpu-alloc: [0] 0 1 Nov 8 00:20:07.043982 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 8 00:20:07.044001 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:20:07.044016 kernel: random: crng init done Nov 8 00:20:07.044031 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:20:07.044051 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 8 00:20:07.044066 kernel: Fallback order for Node 0: 0 Nov 8 00:20:07.044082 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803 Nov 8 00:20:07.044097 kernel: Policy zone: DMA32 Nov 8 00:20:07.044142 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:20:07.044158 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 125148K reserved, 0K cma-reserved) Nov 8 00:20:07.044173 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:20:07.044188 kernel: Kernel/User page tables isolation: enabled Nov 8 00:20:07.044210 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:20:07.044226 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:20:07.044241 kernel: Dynamic Preempt: voluntary Nov 8 00:20:07.044256 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:20:07.044272 kernel: rcu: RCU event tracing is enabled. Nov 8 00:20:07.044288 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:20:07.044303 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:20:07.044318 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:20:07.044334 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:20:07.044349 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:20:07.044369 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:20:07.044384 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 8 00:20:07.044399 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:20:07.044414 kernel: Console: colour VGA+ 80x25 Nov 8 00:20:07.044435 kernel: printk: console [tty0] enabled Nov 8 00:20:07.044451 kernel: printk: console [ttyS0] enabled Nov 8 00:20:07.044466 kernel: ACPI: Core revision 20230628 Nov 8 00:20:07.044482 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 8 00:20:07.044495 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:20:07.044517 kernel: x2apic enabled Nov 8 00:20:07.044533 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:20:07.044548 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:20:07.044564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Nov 8 00:20:07.044594 kernel: Calibrating delay loop (skipped) preset value.. 
4988.28 BogoMIPS (lpj=2494140) Nov 8 00:20:07.044629 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 8 00:20:07.044646 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 8 00:20:07.044662 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:20:07.044697 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:20:07.044713 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:20:07.044729 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 8 00:20:07.044749 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:20:07.044765 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:20:07.044782 kernel: MDS: Mitigation: Clear CPU buffers Nov 8 00:20:07.044797 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:20:07.044814 kernel: active return thunk: its_return_thunk Nov 8 00:20:07.044836 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:20:07.044857 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:20:07.044874 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:20:07.044890 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:20:07.044906 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:20:07.044923 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 8 00:20:07.044940 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:20:07.044957 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:20:07.044973 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:20:07.044993 kernel: landlock: Up and running. Nov 8 00:20:07.045009 kernel: SELinux: Initializing. Nov 8 00:20:07.045026 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:20:07.045041 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 8 00:20:07.045058 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Nov 8 00:20:07.045075 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:20:07.045092 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:20:07.045124 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:20:07.045141 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. Nov 8 00:20:07.045163 kernel: signal: max sigframe size: 1776 Nov 8 00:20:07.045179 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:20:07.045196 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:20:07.045213 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:20:07.045229 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:20:07.045246 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:20:07.045262 kernel: .... 
node #0, CPUs: #1 Nov 8 00:20:07.045278 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:20:07.045298 kernel: smpboot: Max logical packages: 1 Nov 8 00:20:07.045320 kernel: smpboot: Total of 2 processors activated (9976.56 BogoMIPS) Nov 8 00:20:07.045337 kernel: devtmpfs: initialized Nov 8 00:20:07.045352 kernel: x86/mm: Memory block size: 128MB Nov 8 00:20:07.045369 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:20:07.045386 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:20:07.045403 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:20:07.045420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:20:07.045437 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:20:07.045453 kernel: audit: type=2000 audit(1762561206.040:1): state=initialized audit_enabled=0 res=1 Nov 8 00:20:07.045475 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:20:07.045491 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:20:07.045508 kernel: cpuidle: using governor menu Nov 8 00:20:07.045524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:20:07.045541 kernel: dca service started, version 1.12.1 Nov 8 00:20:07.045558 kernel: PCI: Using configuration type 1 for base access Nov 8 00:20:07.045575 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 8 00:20:07.045593 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:20:07.045609 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:20:07.045631 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:20:07.045647 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:20:07.045664 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:20:07.045681 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:20:07.045697 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:20:07.045714 kernel: ACPI: Interpreter enabled Nov 8 00:20:07.045730 kernel: ACPI: PM: (supports S0 S5) Nov 8 00:20:07.045747 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:20:07.045763 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:20:07.045784 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:20:07.045801 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Nov 8 00:20:07.045817 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:20:07.047324 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:20:07.047477 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 8 00:20:07.047578 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 8 00:20:07.047591 kernel: acpiphp: Slot [3] registered Nov 8 00:20:07.047608 kernel: acpiphp: Slot [4] registered Nov 8 00:20:07.047618 kernel: acpiphp: Slot [5] registered Nov 8 00:20:07.047627 kernel: acpiphp: Slot [6] registered Nov 8 00:20:07.047636 kernel: acpiphp: Slot [7] registered Nov 8 00:20:07.047644 kernel: acpiphp: Slot [8] registered Nov 8 00:20:07.047658 kernel: acpiphp: Slot [9] registered Nov 8 00:20:07.047671 kernel: acpiphp: Slot [10] registered Nov 8 00:20:07.047685 kernel: acpiphp: Slot [11] registered Nov 8 00:20:07.047699 kernel: acpiphp: Slot [12] registered Nov 8 
00:20:07.047718 kernel: acpiphp: Slot [13] registered Nov 8 00:20:07.047730 kernel: acpiphp: Slot [14] registered Nov 8 00:20:07.047743 kernel: acpiphp: Slot [15] registered Nov 8 00:20:07.047755 kernel: acpiphp: Slot [16] registered Nov 8 00:20:07.047769 kernel: acpiphp: Slot [17] registered Nov 8 00:20:07.047782 kernel: acpiphp: Slot [18] registered Nov 8 00:20:07.047796 kernel: acpiphp: Slot [19] registered Nov 8 00:20:07.047809 kernel: acpiphp: Slot [20] registered Nov 8 00:20:07.047821 kernel: acpiphp: Slot [21] registered Nov 8 00:20:07.047834 kernel: acpiphp: Slot [22] registered Nov 8 00:20:07.047852 kernel: acpiphp: Slot [23] registered Nov 8 00:20:07.047864 kernel: acpiphp: Slot [24] registered Nov 8 00:20:07.047877 kernel: acpiphp: Slot [25] registered Nov 8 00:20:07.047891 kernel: acpiphp: Slot [26] registered Nov 8 00:20:07.047902 kernel: acpiphp: Slot [27] registered Nov 8 00:20:07.047919 kernel: acpiphp: Slot [28] registered Nov 8 00:20:07.047931 kernel: acpiphp: Slot [29] registered Nov 8 00:20:07.047943 kernel: acpiphp: Slot [30] registered Nov 8 00:20:07.047955 kernel: acpiphp: Slot [31] registered Nov 8 00:20:07.047973 kernel: PCI host bridge to bus 0000:00 Nov 8 00:20:07.048896 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:20:07.049012 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:20:07.049121 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:20:07.050151 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Nov 8 00:20:07.050277 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Nov 8 00:20:07.050369 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:20:07.050553 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 8 00:20:07.050669 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Nov 8 00:20:07.050781 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Nov 8 00:20:07.050882 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Nov 8 00:20:07.051022 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Nov 8 00:20:07.052260 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Nov 8 00:20:07.052393 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Nov 8 00:20:07.052559 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Nov 8 00:20:07.052728 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Nov 8 00:20:07.054185 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Nov 8 00:20:07.054362 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 8 00:20:07.054555 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Nov 8 00:20:07.054714 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Nov 8 00:20:07.054936 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Nov 8 00:20:07.055538 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Nov 8 00:20:07.055669 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Nov 8 00:20:07.055770 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Nov 8 00:20:07.055868 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Nov 8 00:20:07.055966 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:20:07.057222 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 8 00:20:07.057380 kernel: 
pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Nov 8 00:20:07.057483 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Nov 8 00:20:07.057581 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Nov 8 00:20:07.057702 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 8 00:20:07.057803 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Nov 8 00:20:07.057902 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Nov 8 00:20:07.058007 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Nov 8 00:20:07.059201 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Nov 8 00:20:07.059344 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Nov 8 00:20:07.059446 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Nov 8 00:20:07.059543 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Nov 8 00:20:07.059655 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Nov 8 00:20:07.059752 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Nov 8 00:20:07.059886 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Nov 8 00:20:07.059986 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Nov 8 00:20:07.060097 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000 Nov 8 00:20:07.060226 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Nov 8 00:20:07.060326 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Nov 8 00:20:07.060425 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Nov 8 00:20:07.060538 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Nov 8 00:20:07.060649 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Nov 8 00:20:07.060759 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Nov 8 00:20:07.060772 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:20:07.060782 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:20:07.060791 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:20:07.060800 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 8 00:20:07.060809 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 8 00:20:07.060819 kernel: iommu: Default domain type: Translated Nov 8 00:20:07.060833 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:20:07.060842 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:20:07.060852 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:20:07.060861 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 8 00:20:07.060870 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff] Nov 8 00:20:07.060974 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Nov 8 00:20:07.061078 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Nov 8 00:20:07.063322 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:20:07.063366 kernel: vgaarb: loaded Nov 8 00:20:07.063376 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 8 00:20:07.063387 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 8 00:20:07.063396 kernel: clocksource: Switched to clocksource kvm-clock Nov 8 00:20:07.063405 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:20:07.063415 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:20:07.063423 kernel: pnp: PnP ACPI init Nov 8 00:20:07.063433 kernel: pnp: PnP ACPI: found 4 devices Nov 8 
00:20:07.063442 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:20:07.063455 kernel: NET: Registered PF_INET protocol family Nov 8 00:20:07.063465 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:20:07.063474 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 8 00:20:07.063483 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:20:07.063493 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:20:07.063502 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 8 00:20:07.063511 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 8 00:20:07.063520 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:20:07.063529 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 8 00:20:07.063542 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:20:07.063551 kernel: NET: Registered PF_XDP protocol family Nov 8 00:20:07.063657 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:20:07.063751 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:20:07.063840 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:20:07.063927 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Nov 8 00:20:07.064013 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Nov 8 00:20:07.066214 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Nov 8 00:20:07.066385 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 8 00:20:07.066401 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 8 00:20:07.066504 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 41856 usecs Nov 8 00:20:07.066517 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:20:07.066527 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:20:07.066537 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39a1d859, max_idle_ns: 440795326830 ns Nov 8 00:20:07.066546 kernel: Initialise system trusted keyrings Nov 8 00:20:07.066556 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 00:20:07.066571 kernel: Key type asymmetric registered Nov 8 00:20:07.066581 kernel: Asymmetric key parser 'x509' registered Nov 8 00:20:07.066590 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:20:07.066600 kernel: io scheduler mq-deadline registered Nov 8 00:20:07.066609 kernel: io scheduler kyber registered Nov 8 00:20:07.066619 kernel: io scheduler bfq registered Nov 8 00:20:07.066628 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:20:07.066637 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Nov 8 00:20:07.066646 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 8 00:20:07.066656 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 8 00:20:07.066670 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:20:07.066679 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:20:07.066688 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 00:20:07.066698 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:20:07.066707 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:20:07.066716 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Nov 8 00:20:07.066863 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 8 00:20:07.066962 kernel: rtc_cmos 00:03: registered as rtc0 Nov 8 00:20:07.067096 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:20:06 UTC (1762561206) Nov 8 00:20:07.067222 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 8 00:20:07.067234 kernel: intel_pstate: CPU model not supported Nov 8 00:20:07.067244 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:20:07.067253 kernel: Segment Routing with IPv6 Nov 8 00:20:07.067262 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:20:07.067271 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:20:07.067280 kernel: Key type dns_resolver registered Nov 8 00:20:07.067297 kernel: IPI shorthand broadcast: enabled Nov 8 00:20:07.067307 kernel: sched_clock: Marking stable (1210001992, 144166245)->(1513949513, -159781276) Nov 8 00:20:07.067316 kernel: registered taskstats version 1 Nov 8 00:20:07.067325 kernel: Loading compiled-in X.509 certificates Nov 8 00:20:07.067335 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:20:07.067343 kernel: Key type .fscrypt registered Nov 8 00:20:07.067353 kernel: Key type fscrypt-provisioning registered Nov 8 00:20:07.067362 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:20:07.067371 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:20:07.067384 kernel: ima: No architecture policies found Nov 8 00:20:07.067393 kernel: clk: Disabling unused clocks Nov 8 00:20:07.067403 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:20:07.067412 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:20:07.067421 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:20:07.067453 kernel: Run /init as init process Nov 8 00:20:07.067466 kernel: with arguments: Nov 8 00:20:07.067476 kernel: /init Nov 8 00:20:07.067485 kernel: with environment: Nov 8 00:20:07.067494 kernel: HOME=/ Nov 8 00:20:07.067507 kernel: TERM=linux Nov 8 00:20:07.067519 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:20:07.067532 systemd[1]: Detected virtualization kvm. Nov 8 00:20:07.067542 systemd[1]: Detected architecture x86-64. Nov 8 00:20:07.067552 systemd[1]: Running in initrd. Nov 8 00:20:07.067562 systemd[1]: No hostname configured, using default hostname. Nov 8 00:20:07.067571 systemd[1]: Hostname set to . Nov 8 00:20:07.067585 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:20:07.067595 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:20:07.067605 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:20:07.067615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:20:07.067625 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:20:07.067636 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Nov 8 00:20:07.067646 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:20:07.067656 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:20:07.067671 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:20:07.067680 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:20:07.067691 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:20:07.067701 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:20:07.067710 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:20:07.067720 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:20:07.067730 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:20:07.067743 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:20:07.067753 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:20:07.067763 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:20:07.067773 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:20:07.067783 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:20:07.067804 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:20:07.067818 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:20:07.067836 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:20:07.067850 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:20:07.067865 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:20:07.067879 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:20:07.067892 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:20:07.067906 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:20:07.067921 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:20:07.067941 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:20:07.067955 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:07.067970 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:20:07.068022 systemd-journald[185]: Collecting audit messages is disabled. Nov 8 00:20:07.068054 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:20:07.068064 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:20:07.068076 systemd-journald[185]: Journal started Nov 8 00:20:07.069136 systemd-journald[185]: Runtime Journal (/run/log/journal/c84c2f39de5d433b934969935b3ae2d1) is 4.9M, max 39.3M, 34.4M free. Nov 8 00:20:07.048308 systemd-modules-load[186]: Inserted module 'overlay' Nov 8 00:20:07.079145 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:20:07.086154 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Nov 8 00:20:07.087852 systemd-modules-load[186]: Inserted module 'br_netfilter' Nov 8 00:20:07.132953 kernel: Bridge firewalling registered Nov 8 00:20:07.141215 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:20:07.141890 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:20:07.143143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:07.155478 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:20:07.159471 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:20:07.164652 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:20:07.167748 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:20:07.185458 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:20:07.203158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:20:07.207683 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:20:07.216542 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:20:07.217467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:20:07.220212 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:20:07.230573 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:20:07.240281 dracut-cmdline[217]: dracut-dracut-053 Nov 8 00:20:07.245346 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:20:07.291849 systemd-resolved[221]: Positive Trust Anchors: Nov 8 00:20:07.291879 systemd-resolved[221]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:20:07.291932 systemd-resolved[221]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:20:07.298123 systemd-resolved[221]: Defaulting to hostname 'linux'. Nov 8 00:20:07.300494 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:20:07.301488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:20:07.363155 kernel: SCSI subsystem initialized Nov 8 00:20:07.374157 kernel: Loading iSCSI transport class v2.0-870. 
Nov 8 00:20:07.386149 kernel: iscsi: registered transport (tcp) Nov 8 00:20:07.414227 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:20:07.414351 kernel: QLogic iSCSI HBA Driver Nov 8 00:20:07.472608 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:20:07.480528 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:20:07.514508 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:20:07.514638 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:20:07.516288 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:20:07.564194 kernel: raid6: avx2x4 gen() 16330 MB/s Nov 8 00:20:07.582176 kernel: raid6: avx2x2 gen() 16224 MB/s Nov 8 00:20:07.600539 kernel: raid6: avx2x1 gen() 12875 MB/s Nov 8 00:20:07.600667 kernel: raid6: using algorithm avx2x4 gen() 16330 MB/s Nov 8 00:20:07.619547 kernel: raid6: .... xor() 4758 MB/s, rmw enabled Nov 8 00:20:07.619653 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:20:07.650164 kernel: xor: automatically using best checksumming function avx Nov 8 00:20:07.877152 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:20:07.896033 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:20:07.903357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:20:07.928531 systemd-udevd[403]: Using default interface naming scheme 'v255'. Nov 8 00:20:07.933990 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:20:07.942377 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:20:07.962800 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Nov 8 00:20:08.006218 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:20:08.012388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:20:08.082891 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:20:08.092603 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:20:08.119243 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:20:08.122256 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:20:08.123628 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:20:08.125737 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:20:08.132477 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:20:08.164544 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:20:08.171137 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Nov 8 00:20:08.178245 kernel: scsi host0: Virtio SCSI HBA Nov 8 00:20:08.181360 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 8 00:20:08.198374 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:20:08.198624 kernel: GPT:9289727 != 125829119 Nov 8 00:20:08.199180 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:20:08.201865 kernel: GPT:9289727 != 125829119 Nov 8 00:20:08.202039 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 8 00:20:08.204596 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:20:08.228132 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:20:08.230266 kernel: ACPI: bus type USB registered Nov 8 00:20:08.230337 kernel: usbcore: registered new interface driver usbfs Nov 8 00:20:08.232064 kernel: usbcore: registered new interface driver hub Nov 8 00:20:08.238157 kernel: usbcore: registered new device driver usb Nov 8 00:20:08.240165 kernel: libata version 3.00 loaded. Nov 8 00:20:08.244161 kernel: ata_piix 0000:00:01.1: version 2.13 Nov 8 00:20:08.253140 kernel: scsi host1: ata_piix Nov 8 00:20:08.256139 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Nov 8 00:20:08.260356 kernel: scsi host2: ata_piix Nov 8 00:20:08.260706 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Nov 8 00:20:08.260733 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Nov 8 00:20:08.269138 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Nov 8 00:20:08.279906 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:20:08.280007 kernel: AES CTR mode by8 optimization enabled Nov 8 00:20:08.283592 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:20:08.283779 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:20:08.287262 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:20:08.288158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:20:08.288402 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:08.291322 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:08.300661 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:08.381204 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:08.387481 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:20:08.417860 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:20:08.478231 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Nov 8 00:20:08.478638 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Nov 8 00:20:08.480252 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Nov 8 00:20:08.482632 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Nov 8 00:20:08.485692 kernel: hub 1-0:1.0: USB hub found Nov 8 00:20:08.486002 kernel: hub 1-0:1.0: 2 ports detected Nov 8 00:20:08.486961 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 8 00:20:08.496176 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (448) Nov 8 00:20:08.502159 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461) Nov 8 00:20:08.508922 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 8 00:20:08.524819 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:20:08.537266 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 8 00:20:08.538124 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Nov 8 00:20:08.555422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:20:08.562854 disk-uuid[548]: Primary Header is updated. Nov 8 00:20:08.562854 disk-uuid[548]: Secondary Entries is updated. Nov 8 00:20:08.562854 disk-uuid[548]: Secondary Header is updated. Nov 8 00:20:08.573145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:20:08.585135 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:20:08.591136 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:20:09.592393 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:20:09.592464 disk-uuid[549]: The operation has completed successfully. Nov 8 00:20:09.641925 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:20:09.642160 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:20:09.656332 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:20:09.674460 sh[562]: Success Nov 8 00:20:09.691306 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:20:09.765987 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:20:09.767458 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:20:09.769706 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:20:09.798407 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:20:09.798486 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:20:09.801136 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:20:09.801216 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:20:09.803386 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:20:09.810889 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:20:09.812170 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:20:09.822375 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:20:09.826328 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:20:09.845262 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:20:09.845342 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:20:09.845357 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:20:09.850125 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:20:09.866488 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:20:09.866030 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:20:09.874271 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:20:09.880931 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:20:09.961260 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:20:09.969341 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 8 00:20:09.996863 systemd-networkd[744]: lo: Link UP Nov 8 00:20:09.997147 systemd-networkd[744]: lo: Gained carrier Nov 8 00:20:10.000812 systemd-networkd[744]: Enumeration completed Nov 8 00:20:10.001318 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 8 00:20:10.001322 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Nov 8 00:20:10.002199 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:20:10.003419 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:20:10.003425 systemd-networkd[744]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:20:10.004873 systemd-networkd[744]: eth0: Link UP Nov 8 00:20:10.004879 systemd-networkd[744]: eth0: Gained carrier Nov 8 00:20:10.004892 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Nov 8 00:20:10.005289 systemd[1]: Reached target network.target - Network. Nov 8 00:20:10.014486 systemd-networkd[744]: eth1: Link UP Nov 8 00:20:10.014494 systemd-networkd[744]: eth1: Gained carrier Nov 8 00:20:10.014510 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:20:10.026086 ignition[663]: Ignition 2.19.0 Nov 8 00:20:10.026119 ignition[663]: Stage: fetch-offline Nov 8 00:20:10.026170 ignition[663]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:20:10.026184 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 8 00:20:10.029419 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:20:10.026326 ignition[663]: parsed url from cmdline: "" Nov 8 00:20:10.031208 systemd-networkd[744]: eth1: DHCPv4 address 10.124.0.33/20 acquired from 169.254.169.253 Nov 8 00:20:10.026331 ignition[663]: no config URL provided Nov 8 00:20:10.026339 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:20:10.026352 ignition[663]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:20:10.026364 ignition[663]: failed to fetch config: resource requires networking Nov 8 00:20:10.026665 ignition[663]: Ignition finished successfully Nov 8 00:20:10.036214 systemd-networkd[744]: eth0: DHCPv4 address 64.23.225.39/20, gateway 64.23.224.1 acquired from 169.254.169.253 Nov 8 00:20:10.039987 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 8 00:20:10.064849 ignition[753]: Ignition 2.19.0 Nov 8 00:20:10.065577 ignition[753]: Stage: fetch Nov 8 00:20:10.065802 ignition[753]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:20:10.065813 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 8 00:20:10.065937 ignition[753]: parsed url from cmdline: "" Nov 8 00:20:10.065941 ignition[753]: no config URL provided Nov 8 00:20:10.065946 ignition[753]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:20:10.065954 ignition[753]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:20:10.065976 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Nov 8 00:20:10.078082 ignition[753]: GET result: OK Nov 8 00:20:10.078372 ignition[753]: parsing config with SHA512: 5ccc41bdc81213edbbd24120c2058710a6110b685ae11af0908beffacaae0f2e9ce227b47d151d58fdf2526ebcfb508057bcb082af091e8fe3577013171fb694 Nov 8 00:20:10.085193 unknown[753]: fetched base config from "system" Nov 8 00:20:10.085205 unknown[753]: fetched base config from "system" Nov 8 00:20:10.085753 ignition[753]: fetch: fetch complete Nov 8 00:20:10.085213 unknown[753]: fetched user config from "digitalocean" Nov 8 00:20:10.085760 ignition[753]: fetch: fetch passed Nov 8 00:20:10.087738 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:20:10.085822 ignition[753]: Ignition finished successfully Nov 8 00:20:10.095398 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 8 00:20:10.115671 ignition[760]: Ignition 2.19.0 Nov 8 00:20:10.115682 ignition[760]: Stage: kargs Nov 8 00:20:10.115901 ignition[760]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:20:10.115913 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 8 00:20:10.116795 ignition[760]: kargs: kargs passed Nov 8 00:20:10.116845 ignition[760]: Ignition finished successfully Nov 8 00:20:10.119304 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:20:10.126324 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:20:10.146343 ignition[766]: Ignition 2.19.0 Nov 8 00:20:10.146358 ignition[766]: Stage: disks Nov 8 00:20:10.146555 ignition[766]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:20:10.149470 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:20:10.146570 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 8 00:20:10.150699 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:20:10.147642 ignition[766]: disks: disks passed Nov 8 00:20:10.157618 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:20:10.147698 ignition[766]: Ignition finished successfully Nov 8 00:20:10.158781 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:20:10.159692 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:20:10.160799 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:20:10.169337 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:20:10.187282 systemd-fsck[774]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 8 00:20:10.190812 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:20:10.199251 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 8 00:20:10.305149 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:20:10.306157 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:20:10.307328 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:20:10.317314 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:20:10.320146 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:20:10.321950 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Nov 8 00:20:10.331146 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (782) Nov 8 00:20:10.332306 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:20:10.332912 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:20:10.332950 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:20:10.341479 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:20:10.341511 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:20:10.341524 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:20:10.338193 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:20:10.350802 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:20:10.353790 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 8 00:20:10.359485 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:20:10.415943 coreos-metadata[785]: Nov 08 00:20:10.415 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 8 00:20:10.425140 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:20:10.430219 coreos-metadata[785]: Nov 08 00:20:10.430 INFO Fetch successful Nov 8 00:20:10.436278 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:20:10.437381 coreos-metadata[784]: Nov 08 00:20:10.436 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 8 00:20:10.441698 coreos-metadata[785]: Nov 08 00:20:10.441 INFO wrote hostname ci-4081.3.6-n-f4234a6c60 to /sysroot/etc/hostname Nov 8 00:20:10.444474 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:20:10.448120 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:20:10.450491 coreos-metadata[784]: Nov 08 00:20:10.450 INFO Fetch successful Nov 8 00:20:10.453928 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:20:10.461015 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Nov 8 00:20:10.461147 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Nov 8 00:20:10.555428 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:20:10.567602 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:20:10.573399 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:20:10.583138 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:20:10.612266 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 8 00:20:10.622127 ignition[904]: INFO : Ignition 2.19.0 Nov 8 00:20:10.622127 ignition[904]: INFO : Stage: mount Nov 8 00:20:10.622127 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:20:10.622127 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 8 00:20:10.625419 ignition[904]: INFO : mount: mount passed Nov 8 00:20:10.625419 ignition[904]: INFO : Ignition finished successfully Nov 8 00:20:10.625908 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:20:10.632274 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:20:10.797275 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:20:10.800362 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:20:10.815166 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (917) Nov 8 00:20:10.818926 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:20:10.819074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:20:10.819091 kernel: BTRFS info (device vda6): using free space tree Nov 8 00:20:10.824152 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 00:20:10.826411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:20:10.858273 ignition[934]: INFO : Ignition 2.19.0 Nov 8 00:20:10.858273 ignition[934]: INFO : Stage: files Nov 8 00:20:10.859715 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:20:10.859715 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 8 00:20:10.861538 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:20:10.861538 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:20:10.861538 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:20:10.865035 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:20:10.865946 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:20:10.866668 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:20:10.866550 unknown[934]: wrote ssh authorized keys file for user: core Nov 8 00:20:10.870592 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:20:10.871669 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:20:11.037380 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:20:11.134855 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:20:11.134855 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:20:11.134855 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:20:11.134855 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:20:11.134855 ignition[934]: INFO : files: createFilesystemsFiles: 
createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:20:11.134855 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:20:11.134855 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:20:11.134855 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:20:11.134855 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:20:11.141769 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:20:11.141769 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:20:11.141769 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:20:11.141769 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:20:11.141769 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:20:11.141769 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 8 00:20:11.270382 systemd-networkd[744]: eth0: Gained IPv6LL Nov 8 00:20:11.585803 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:20:11.911528 systemd-networkd[744]: eth1: Gained IPv6LL Nov 8 00:20:11.919133 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:20:11.920401 ignition[934]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:20:11.921194 ignition[934]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:20:11.922014 ignition[934]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:20:11.922014 ignition[934]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:20:11.922014 ignition[934]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:20:11.922014 ignition[934]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:20:11.926199 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:20:11.926199 ignition[934]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:20:11.926199 ignition[934]: INFO : files: files passed Nov 8 00:20:11.926199 ignition[934]: INFO : Ignition finished successfully Nov 8 00:20:11.924687 systemd[1]: Finished ignition-files.service - Ignition (files). 
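The files stage above drops the kubernetes sysext image under /opt/extensions and links it into /etc/extensions so systemd-sysext can merge it after the pivot. A sketch of that write-then-link pattern; the paths mirror the log, but the helper itself and its (omitted) error handling are illustrative:

```python
# Sketch of the files-stage pattern above: place the sysext image under
# /opt/extensions and symlink it into /etc/extensions, relative to the
# initrd's /sysroot mount.
import os
from pathlib import Path

def install_sysext(sysroot: str, name: str, version: str, payload: bytes) -> None:
    image = Path(sysroot, f"opt/extensions/{name}/{name}-{version}.raw")
    image.parent.mkdir(parents=True, exist_ok=True)
    image.write_bytes(payload)

    link = Path(sysroot, f"etc/extensions/{name}.raw")
    link.parent.mkdir(parents=True, exist_ok=True)
    # Symlink target is a post-pivot path, exactly as the log shows:
    # /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/...
    os.symlink(f"/opt/extensions/{name}/{name}-{version}.raw", link)

# install_sysext("/sysroot", "kubernetes", "v1.33.0-x86-64", data)
```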
Nov 8 00:20:11.931574 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:20:11.936146 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:20:11.940460 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:20:11.940619 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:20:11.972162 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:20:11.974286 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:20:11.975381 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:20:11.975351 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:20:11.977016 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:20:11.985503 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:20:12.049309 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:20:12.049454 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:20:12.050880 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:20:12.051924 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:20:12.052974 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:20:12.058394 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:20:12.075454 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:20:12.083340 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:20:12.096631 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:20:12.097484 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:20:12.098634 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:20:12.099747 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:20:12.099984 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:20:12.101188 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:20:12.102407 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:20:12.103567 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:20:12.104487 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:20:12.105659 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:20:12.106785 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:20:12.107887 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:20:12.109080 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:20:12.110170 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:20:12.111275 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:20:12.112239 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:20:12.112453 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Nov 8 00:20:12.113683 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:20:12.115051 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:20:12.116037 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:20:12.116238 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:20:12.117190 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:20:12.117415 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:20:12.118606 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:20:12.118846 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:20:12.120148 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:20:12.120309 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:20:12.121185 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:20:12.121345 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:20:12.133531 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:20:12.135561 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:20:12.135812 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:20:12.140534 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:20:12.141429 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:20:12.142177 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:20:12.143542 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:20:12.143993 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:20:12.157426 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:20:12.157598 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:20:12.163064 ignition[987]: INFO : Ignition 2.19.0 Nov 8 00:20:12.163064 ignition[987]: INFO : Stage: umount Nov 8 00:20:12.163064 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:20:12.163064 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Nov 8 00:20:12.166850 ignition[987]: INFO : umount: umount passed Nov 8 00:20:12.166850 ignition[987]: INFO : Ignition finished successfully Nov 8 00:20:12.168162 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:20:12.168395 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:20:12.169998 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:20:12.170468 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:20:12.171522 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:20:12.171592 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:20:12.177789 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:20:12.177877 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:20:12.178679 systemd[1]: Stopped target network.target - Network. Nov 8 00:20:12.179582 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:20:12.179696 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 8 00:20:12.181802 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:20:12.183930 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:20:12.184117 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:20:12.184948 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:20:12.186490 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:20:12.189203 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:20:12.189274 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:20:12.190437 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:20:12.190505 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:20:12.192577 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:20:12.192666 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:20:12.193714 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:20:12.193792 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:20:12.197346 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:20:12.198412 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:20:12.202508 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:20:12.202600 systemd-networkd[744]: eth1: DHCPv6 lease lost Nov 8 00:20:12.206543 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:20:12.206698 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:20:12.207176 systemd-networkd[744]: eth0: DHCPv6 lease lost Nov 8 00:20:12.208472 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:20:12.208613 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:20:12.210762 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:20:12.211325 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:20:12.213751 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:20:12.216032 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:20:12.218715 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:20:12.218765 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:20:12.225281 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:20:12.225933 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:20:12.226025 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:20:12.226740 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:20:12.226808 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:20:12.228238 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:20:12.228291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:20:12.229455 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:20:12.229499 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:20:12.230758 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:20:12.246406 systemd[1]: network-cleanup.service: Deactivated successfully. 
Nov 8 00:20:12.247204 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:20:12.250540 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:20:12.250757 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:20:12.252635 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:20:12.252733 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:20:12.253868 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:20:12.253908 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:20:12.254932 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:20:12.255031 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:20:12.256517 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:20:12.256569 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:20:12.257456 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:20:12.257502 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:20:12.263443 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:20:12.264132 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:20:12.264221 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:20:12.265526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:20:12.265599 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:12.275187 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:20:12.276271 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:20:12.278695 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:20:12.286395 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:20:12.298065 systemd[1]: Switching root. Nov 8 00:20:12.346430 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Nov 8 00:20:12.346528 systemd-journald[185]: Journal stopped Nov 8 00:20:13.615460 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:20:13.615577 kernel: SELinux: policy capability open_perms=1 Nov 8 00:20:13.615601 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:20:13.615620 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:20:13.615644 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:20:13.615663 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:20:13.615685 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:20:13.615723 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:20:13.615744 kernel: audit: type=1403 audit(1762561212.507:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:20:13.615774 systemd[1]: Successfully loaded SELinux policy in 45.330ms. Nov 8 00:20:13.615818 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.602ms. 
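The audit record above, `audit(1762561212.507:2)`, encodes epoch seconds with millisecond precision plus a per-boot serial number. A small decoder for that stamp format:

```python
# Decode the kernel audit stamp logged above: "audit(<epoch.ms>:<serial>)".
from datetime import datetime, timezone

def parse_audit_stamp(stamp: str) -> tuple[datetime, int]:
    inner = stamp[stamp.index("(") + 1 : stamp.index(")")]
    ts, serial = inner.split(":")
    return datetime.fromtimestamp(float(ts), tz=timezone.utc), int(serial)

when, serial = parse_audit_stamp("audit(1762561212.507:2)")
print(when.isoformat(), serial)  # 2025-11-08T00:20:12.507000+00:00 2
```

The decoded time matches the surrounding journal timestamps: the SELinux policy load happens right as the initrd hands off to the real root.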
Nov 8 00:20:13.615841 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:20:13.615863 systemd[1]: Detected virtualization kvm. Nov 8 00:20:13.615882 systemd[1]: Detected architecture x86-64. Nov 8 00:20:13.615901 systemd[1]: Detected first boot. Nov 8 00:20:13.615922 systemd[1]: Hostname set to <ci-4081.3.6-n-f4234a6c60>. Nov 8 00:20:13.615949 systemd[1]: Initializing machine ID from VM UUID. Nov 8 00:20:13.615971 zram_generator::config[1033]: No configuration found. Nov 8 00:20:13.616001 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:20:13.616024 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:20:13.616046 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:20:13.616069 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:20:13.616093 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:20:13.617194 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:20:13.617249 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:20:13.617273 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:20:13.617296 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:20:13.617317 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:20:13.617340 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:20:13.617357 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:20:13.617374 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:20:13.617392 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:20:13.617410 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:20:13.617439 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:20:13.617461 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:20:13.617481 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:20:13.617501 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:20:13.617521 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:20:13.617543 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:20:13.617563 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:20:13.617592 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:20:13.617617 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:20:13.617640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:20:13.617662 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:20:13.617684 systemd[1]: Reached target slices.target - Slice Units.
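The `systemd 255 running in system mode (...)` banner opening this block encodes compile-time features as `+NAME`/`-NAME` tokens plus `key=value` options. A short parser that splits them apart, using the exact string from the log:

```python
# Parse systemd's compile-time feature banner into enabled/disabled
# features and key=value options.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified")

def parse_features(s: str):
    enabled, disabled, options = set(), set(), {}
    for tok in s.split():
        if tok.startswith("+"):
            enabled.add(tok[1:])
        elif tok.startswith("-"):
            disabled.add(tok[1:])
        elif "=" in tok:
            key, _, val = tok.partition("=")
            options[key] = val
    return enabled, disabled, options

enabled, disabled, options = parse_features(FEATURES)
assert "SELINUX" in enabled and "APPARMOR" in disabled
assert options["default-hierarchy"] == "unified"
```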
Nov 8 00:20:13.617705 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:20:13.617726 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:20:13.617746 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:20:13.617775 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:20:13.617797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:20:13.617818 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:20:13.617839 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:20:13.617860 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:20:13.617882 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:20:13.617903 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:20:13.617924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:13.617946 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:20:13.617975 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:20:13.617997 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:20:13.618020 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:20:13.618041 systemd[1]: Reached target machines.target - Containers. Nov 8 00:20:13.618061 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:20:13.618082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:13.618120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:20:13.618145 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:20:13.618173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:13.618195 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:20:13.618220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:13.618242 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:20:13.618264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:20:13.618286 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:20:13.618310 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:20:13.618332 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:20:13.618355 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:20:13.618383 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:20:13.618405 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:20:13.618427 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:20:13.618449 kernel: fuse: init (API version 7.39) Nov 8 00:20:13.618473 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 8 00:20:13.618495 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:20:13.618517 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:20:13.618539 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:20:13.618560 systemd[1]: Stopped verity-setup.service. Nov 8 00:20:13.618607 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:13.618630 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:20:13.618650 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:20:13.618672 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:20:13.618698 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:20:13.618724 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:20:13.618746 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:20:13.618768 kernel: ACPI: bus type drm_connector registered Nov 8 00:20:13.618788 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:20:13.618810 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:20:13.618832 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:20:13.618855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:13.618882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:13.618902 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:20:13.618923 kernel: loop: module loaded Nov 8 00:20:13.618944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:20:13.618988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:20:13.619069 systemd-journald[1099]: Collecting audit messages is disabled. Nov 8 00:20:13.619178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:13.619206 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:20:13.619230 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:20:13.619254 systemd-journald[1099]: Journal started Nov 8 00:20:13.619295 systemd-journald[1099]: Runtime Journal (/run/log/journal/c84c2f39de5d433b934969935b3ae2d1) is 4.9M, max 39.3M, 34.4M free. Nov 8 00:20:13.181356 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:20:13.212485 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 8 00:20:13.213085 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:20:13.623223 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:20:13.626362 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:13.626578 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:13.627706 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:20:13.628680 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:20:13.629507 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:20:13.653943 systemd[1]: Reached target network-pre.target - Preparation for Network. 
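journald above reports "Runtime Journal ... is 4.9M, max 39.3M, 34.4M free". Per journald.conf(5), RuntimeMaxUse defaults to 10% of the backing filesystem (here /run), capped at 4G, so the logged cap implies the size of /run on this droplet, and the "free" figure is just max minus current usage. A quick sketch of the arithmetic, assuming the stock default is in effect:

```python
# Invert journald's logged runtime-journal cap to estimate /run's size.
GiB = 1 << 30
MiB = 1 << 20

def runtime_max_use(run_fs_bytes: int) -> int:
    # journald.conf(5) default: min(10% of the filesystem, 4G).
    return min(run_fs_bytes // 10, 4 * GiB)

print(f"implied /run size: {39.3 * MiB / 0.10 / MiB:.0f}M")  # ~393M
print(f"free = max - used: {39.3 - 4.9:.1f}M")               # 34.4M, as logged
```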
Nov 8 00:20:13.665272 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:20:13.679222 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:20:13.679862 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:20:13.679937 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:20:13.682426 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:20:13.698564 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:20:13.704178 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:20:13.706558 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:13.712508 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:20:13.716258 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:20:13.719340 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:20:13.726527 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:20:13.727256 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:20:13.733386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:20:13.736261 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:20:13.741440 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:20:13.742519 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:20:13.743315 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:20:13.744172 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:20:13.765613 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:20:13.779200 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:20:13.798738 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:20:13.805696 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:20:13.813424 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:20:13.840704 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:20:13.847932 systemd-journald[1099]: Time spent on flushing to /var/log/journal/c84c2f39de5d433b934969935b3ae2d1 is 123.931ms for 990 entries. Nov 8 00:20:13.847932 systemd-journald[1099]: System Journal (/var/log/journal/c84c2f39de5d433b934969935b3ae2d1) is 8.0M, max 195.6M, 187.6M free. Nov 8 00:20:14.003497 systemd-journald[1099]: Received client request to flush runtime journal. Nov 8 00:20:14.003568 kernel: loop1: detected capacity change from 0 to 229808 Nov 8 00:20:14.003587 kernel: loop2: detected capacity change from 0 to 142488 Nov 8 00:20:13.889315 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:20:13.891702 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
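The flush report above (123.931ms for 990 entries) works out to roughly 125 microseconds per entry moved from the runtime journal to persistent storage:

```python
# Per-entry cost of the journal flush logged above.
flush_ms, entries = 123.931, 990
print(f"~{flush_ms / entries * 1000:.0f} us per entry")  # ~125 us per entry
```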
Nov 8 00:20:13.921214 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:20:13.976939 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:20:13.980514 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:20:13.994482 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:20:14.008518 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:20:14.010918 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:20:14.059202 kernel: loop3: detected capacity change from 0 to 8 Nov 8 00:20:14.075549 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:20:14.100146 kernel: loop4: detected capacity change from 0 to 140768 Nov 8 00:20:14.101334 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Nov 8 00:20:14.101368 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Nov 8 00:20:14.114725 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:20:14.138128 kernel: loop5: detected capacity change from 0 to 229808 Nov 8 00:20:14.180131 kernel: loop6: detected capacity change from 0 to 142488 Nov 8 00:20:14.210137 kernel: loop7: detected capacity change from 0 to 8 Nov 8 00:20:14.214741 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Nov 8 00:20:14.216544 (sd-merge)[1174]: Merged extensions into '/usr'. Nov 8 00:20:14.226804 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:20:14.226829 systemd[1]: Reloading... Nov 8 00:20:14.430145 zram_generator::config[1201]: No configuration found. Nov 8 00:20:14.581147 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:20:14.689199 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:14.771420 systemd[1]: Reloading finished in 541 ms. Nov 8 00:20:14.800791 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:20:14.805056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:20:14.818608 systemd[1]: Starting ensure-sysext.service... Nov 8 00:20:14.824524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:20:14.836792 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:20:14.836949 systemd[1]: Reloading... Nov 8 00:20:14.926186 zram_generator::config[1268]: No configuration found. Nov 8 00:20:14.933384 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:20:14.933836 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:20:14.941603 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:20:14.942059 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. Nov 8 00:20:14.942192 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. 
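The sd-merge step above stacks the four extension images as read-only overlayfs layers on top of /usr. An illustrative sketch of the equivalent mount invocation: the /run staging paths are hypothetical, and the real tool attaches the .raw images and validates hierarchies itself; only the layering idea is shown here.

```python
# Build an overlayfs mount command like the one systemd-sysext effectively
# performs when it "merges extensions into /usr". In overlayfs, earlier
# lowerdir entries sit higher in the stack, so the base tree goes last.
def overlay_cmd(extension_dirs: list[str], base: str = "/usr") -> str:
    lowerdirs = extension_dirs + [base]
    return ("mount -t overlay overlay "
            f"-o ro,lowerdir={':'.join(lowerdirs)} {base}")

print(overlay_cmd([
    "/run/extensions/containerd-flatcar/usr",   # hypothetical staging paths
    "/run/extensions/docker-flatcar/usr",
    "/run/extensions/kubernetes/usr",
    "/run/extensions/oem-digitalocean/usr",
]))
```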
Nov 8 00:20:14.962408 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:20:14.962427 systemd-tmpfiles[1245]: Skipping /boot Nov 8 00:20:14.994310 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:20:14.994325 systemd-tmpfiles[1245]: Skipping /boot Nov 8 00:20:15.113796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:15.175284 systemd[1]: Reloading finished in 337 ms. Nov 8 00:20:15.198248 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:20:15.203037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:20:15.217480 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:20:15.220752 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:20:15.229702 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:20:15.238985 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:20:15.247427 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:20:15.256548 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:20:15.266012 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:15.266270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:15.275574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:15.281482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:15.285571 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:20:15.286294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:15.292557 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:20:15.293158 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:15.297833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:15.298035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:15.299288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:15.299420 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:15.306549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:15.306898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:15.325549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Nov 8 00:20:15.326508 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:15.326774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:15.337314 systemd[1]: Finished ensure-sysext.service. Nov 8 00:20:15.341282 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:20:15.342893 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:20:15.358406 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:20:15.361886 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:20:15.363070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:15.363284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:15.378843 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Nov 8 00:20:15.407496 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:15.407675 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:15.408485 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:20:15.408883 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:20:15.409047 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:20:15.417223 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:20:15.419940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:20:15.420872 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:15.422847 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:20:15.422893 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:20:15.436775 augenrules[1352]: No rules Nov 8 00:20:15.438999 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:20:15.451741 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:20:15.454466 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:20:15.455348 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:20:15.465420 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:20:15.623785 systemd-resolved[1320]: Positive Trust Anchors: Nov 8 00:20:15.623824 systemd-resolved[1320]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:20:15.623886 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:20:15.633509 systemd-resolved[1320]: Using system hostname 'ci-4081.3.6-n-f4234a6c60'. Nov 8 00:20:15.637133 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:20:15.637790 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:20:15.650364 systemd-networkd[1367]: lo: Link UP Nov 8 00:20:15.651804 systemd-networkd[1367]: lo: Gained carrier Nov 8 00:20:15.655969 systemd-networkd[1367]: Enumeration completed Nov 8 00:20:15.656755 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:20:15.657749 systemd[1]: Reached target network.target - Network. Nov 8 00:20:15.664829 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:20:15.671699 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:20:15.672651 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:20:15.711203 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:20:15.723359 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Nov 8 00:20:15.724011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:15.724250 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:20:15.732849 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:20:15.741461 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:20:15.745513 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:20:15.750216 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1365) Nov 8 00:20:15.746180 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:20:15.746244 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:20:15.746267 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:20:15.770630 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:20:15.773835 kernel: ISO 9660 Extensions: RRIP_1991A Nov 8 00:20:15.772116 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:20:15.778152 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Nov 8 00:20:15.802868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
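The positive trust anchor logged above is the DNSSEC root DS record: owner `.`, key tag 20326 (the 2017 root KSK), algorithm 8 (RSASHA256), digest type 2 (SHA-256), then the 32-byte digest in hex. A parser for that presentation format:

```python
# Parse the DS record systemd-resolved logs as its positive trust anchor.
from typing import NamedTuple

class DSRecord(NamedTuple):
    owner: str
    key_tag: int
    algorithm: int
    digest_type: int
    digest: str

def parse_ds(line: str) -> DSRecord:
    owner, _cls, _rtype, tag, alg, dtype, digest = line.split()
    return DSRecord(owner, int(tag), int(alg), int(dtype), digest.lower())

anchor = parse_ds(". IN DS 20326 8 2 "
                  "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
assert anchor.key_tag == 20326
assert len(bytes.fromhex(anchor.digest)) == 32  # SHA-256 digest
```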
Nov 8 00:20:15.804321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:20:15.805692 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:20:15.809667 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:20:15.809940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:20:15.813909 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:20:15.820274 systemd-networkd[1367]: eth0: Configuring with /run/systemd/network/10-e2:42:52:bb:fa:61.network. Nov 8 00:20:15.821610 systemd-networkd[1367]: eth0: Link UP Nov 8 00:20:15.821619 systemd-networkd[1367]: eth0: Gained carrier Nov 8 00:20:15.830417 systemd-timesyncd[1338]: Network configuration changed, trying to establish connection. Nov 8 00:20:15.893664 systemd-networkd[1367]: eth1: Configuring with /run/systemd/network/10-aa:00:ce:a0:c8:53.network. Nov 8 00:20:15.896398 systemd-networkd[1367]: eth1: Link UP Nov 8 00:20:15.896535 systemd-networkd[1367]: eth1: Gained carrier Nov 8 00:20:15.915195 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Nov 8 00:20:15.927236 kernel: ACPI: button: Power Button [PWRF] Nov 8 00:20:15.933825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:20:15.941391 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:20:15.945144 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Nov 8 00:20:15.957134 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:20:15.980824 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:20:16.057560 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Nov 8 00:20:16.057682 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Nov 8 00:20:16.066582 kernel: Console: switching to colour dummy device 80x25 Nov 8 00:20:16.066702 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:20:16.071696 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 8 00:20:16.071830 kernel: [drm] features: -context_init Nov 8 00:20:16.073878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:16.082582 kernel: [drm] number of scanouts: 1 Nov 8 00:20:16.082768 kernel: [drm] number of cap sets: 0 Nov 8 00:20:16.085198 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Nov 8 00:20:16.091619 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 8 00:20:16.091734 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:20:16.092954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:20:16.094385 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:16.100741 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 8 00:20:16.111556 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:17.039550 systemd-resolved[1320]: Clock change detected. Flushing caches. Nov 8 00:20:17.039583 systemd-timesyncd[1338]: Contacted time server 15.204.244.66:123 (0.flatcar.pool.ntp.org). 
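networkd above configures each NIC from a unit named after its MAC address (e.g. /run/systemd/network/10-e2:42:52:bb:fa:61.network). A sketch of a generator for such a unit; the log only shows the file names, so the [Network] settings here are assumptions (DHCP is consistent with the lease messages elsewhere in this log):

```python
# Generate a per-NIC .network unit matching on MAC address, in the naming
# scheme the log shows under /run/systemd/network/.
def network_unit(mac: str, dhcp: str = "yes") -> tuple[str, str]:
    name = f"/run/systemd/network/10-{mac}.network"
    body = (
        "[Match]\n"
        f"MACAddress={mac}\n"
        "\n"
        "[Network]\n"
        f"DHCP={dhcp}\n"
    )
    return name, body

path, unit = network_unit("e2:42:52:bb:fa:61")
print(path)
print(unit)
```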
Nov 8 00:20:17.039695 systemd-timesyncd[1338]: Initial clock synchronization to Sat 2025-11-08 00:20:17.039330 UTC. Nov 8 00:20:17.061947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:20:17.062191 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:17.107531 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:20:17.193533 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:20:17.221588 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:20:17.232182 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:20:17.233256 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:20:17.249155 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:20:17.284744 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:20:17.287033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:20:17.287245 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:20:17.287834 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:20:17.289862 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:20:17.290313 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:20:17.290558 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:20:17.290669 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:20:17.290749 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:20:17.290790 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:20:17.290861 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:20:17.295075 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:20:17.298055 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:20:17.306578 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:20:17.310251 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:20:17.311597 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:20:17.314576 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:20:17.315946 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:20:17.318267 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:20:17.318300 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:20:17.325078 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:20:17.329124 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:20:17.336168 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:20:17.342155 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:20:17.353151 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
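The journal timestamps jump from 00:20:16.111556 straight to 00:20:17.039550 when timesyncd steps the clock, so the gap between the last pre-step entry and the first post-step entry approximates the correction (it also includes whatever real time elapsed between the two entries):

```python
# Estimate the NTP clock step from the adjacent journal timestamps above.
from datetime import datetime

FMT = "%H:%M:%S.%f"
before = datetime.strptime("00:20:16.111556", FMT)  # last pre-step entry
after = datetime.strptime("00:20:17.039550", FMT)   # first post-step entry
print(f"clock stepped by ~{(after - before).total_seconds():.3f}s")  # ~0.928s
```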
Nov 8 00:20:17.362566 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:20:17.364091 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:20:17.369706 jq[1432]: false Nov 8 00:20:17.374143 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:20:17.383076 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:20:17.387386 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:20:17.399142 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:20:17.414894 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:20:17.418001 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:20:17.418556 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:20:17.421478 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:20:17.427083 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:20:17.431105 dbus-daemon[1431]: [system] SELinux support is enabled Nov 8 00:20:17.432343 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:20:17.435795 coreos-metadata[1430]: Nov 08 00:20:17.435 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 8 00:20:17.443270 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:20:17.451178 coreos-metadata[1430]: Nov 08 00:20:17.447 INFO Fetch successful Nov 8 00:20:17.453839 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:20:17.455139 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:20:17.467298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:20:17.467354 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:20:17.469424 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:20:17.469513 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Nov 8 00:20:17.469534 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:20:17.483784 update_engine[1443]: I20251108 00:20:17.483650 1443 main.cc:92] Flatcar Update Engine starting Nov 8 00:20:17.487362 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:20:17.490378 update_engine[1443]: I20251108 00:20:17.490128 1443 update_check_scheduler.cc:74] Next update check in 9m2s Nov 8 00:20:17.496113 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:20:17.518130 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Nov 8 00:20:17.518379 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:20:17.528554 extend-filesystems[1433]: Found loop4 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found loop5 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found loop6 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found loop7 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found vda Nov 8 00:20:17.535578 extend-filesystems[1433]: Found vda1 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found vda2 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found vda3 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found usr Nov 8 00:20:17.535578 extend-filesystems[1433]: Found vda4 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found vda6 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found vda7 Nov 8 00:20:17.535578 extend-filesystems[1433]: Found vda9 Nov 8 00:20:17.535578 extend-filesystems[1433]: Checking size of /dev/vda9 Nov 8 00:20:17.533156 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:20:17.595558 jq[1444]: true Nov 8 00:20:17.595649 extend-filesystems[1433]: Resized partition /dev/vda9 Nov 8 00:20:17.713546 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1379) Nov 8 00:20:17.713592 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Nov 8 00:20:17.533633 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:20:17.718609 tar[1451]: linux-amd64/LICENSE Nov 8 00:20:17.718609 tar[1451]: linux-amd64/helm Nov 8 00:20:17.723136 extend-filesystems[1474]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:20:17.577466 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:20:17.587139 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:20:17.735245 jq[1465]: true Nov 8 00:20:17.593974 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:20:17.616572 systemd-logind[1440]: New seat seat0. Nov 8 00:20:17.716562 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:20:17.716592 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:20:17.719786 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:20:17.796508 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 8 00:20:17.817770 extend-filesystems[1474]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:20:17.817770 extend-filesystems[1474]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 8 00:20:17.817770 extend-filesystems[1474]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 8 00:20:17.843404 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Nov 8 00:20:17.843404 extend-filesystems[1433]: Found vdb Nov 8 00:20:17.823013 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:20:17.823302 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:20:17.865938 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:20:17.865952 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:20:17.889474 systemd[1]: Starting sshkeys.service... 
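The extend-filesystems figures above are easy to sanity-check: resize2fs grew /dev/vda9 from 553472 to 15121403 blocks of 4 KiB, i.e. from a ~2.1 GiB seed filesystem to the droplet's full ~57.7 GiB disk. The snippet below just redoes that arithmetic with the numbers quoted in the log.

BLOCK_SIZE = 4096  # the log reports "(4k) blocks"

def gib(blocks):
    return blocks * BLOCK_SIZE / 2**30

old_blocks, new_blocks = 553_472, 15_121_403
print(f"before resize: {gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after resize:  {gib(new_blocks):.2f} GiB")  # ~57.68 GiB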
Nov 8 00:20:17.927190 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:20:17.940152 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:20:17.982843 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:20:18.001091 coreos-metadata[1505]: Nov 08 00:20:18.000 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Nov 8 00:20:18.014842 coreos-metadata[1505]: Nov 08 00:20:18.014 INFO Fetch successful Nov 8 00:20:18.050569 unknown[1505]: wrote ssh authorized keys file for user: core Nov 8 00:20:18.079300 systemd-networkd[1367]: eth0: Gained IPv6LL Nov 8 00:20:18.080067 systemd-networkd[1367]: eth1: Gained IPv6LL Nov 8 00:20:18.095609 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:20:18.106312 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:20:18.120313 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:18.132501 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:20:18.150331 update-ssh-keys[1510]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:20:18.153552 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:20:18.167887 systemd[1]: Finished sshkeys.service. Nov 8 00:20:18.242500 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:20:18.269357 containerd[1463]: time="2025-11-08T00:20:18.267446766Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:20:18.347008 containerd[1463]: time="2025-11-08T00:20:18.346915628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:18.352071 containerd[1463]: time="2025-11-08T00:20:18.352002751Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:18.353041 containerd[1463]: time="2025-11-08T00:20:18.352563958Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:20:18.353041 containerd[1463]: time="2025-11-08T00:20:18.352595476Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:20:18.353041 containerd[1463]: time="2025-11-08T00:20:18.352785745Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:20:18.353041 containerd[1463]: time="2025-11-08T00:20:18.352805796Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:18.353041 containerd[1463]: time="2025-11-08T00:20:18.352865897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:18.353041 containerd[1463]: time="2025-11-08T00:20:18.352919505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Nov 8 00:20:18.354213 containerd[1463]: time="2025-11-08T00:20:18.353695523Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:18.354213 containerd[1463]: time="2025-11-08T00:20:18.353720792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:18.354213 containerd[1463]: time="2025-11-08T00:20:18.353736461Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:18.354213 containerd[1463]: time="2025-11-08T00:20:18.353746697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:18.354213 containerd[1463]: time="2025-11-08T00:20:18.353848436Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:18.355100 containerd[1463]: time="2025-11-08T00:20:18.354642443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:20:18.355344 containerd[1463]: time="2025-11-08T00:20:18.355323709Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:20:18.355900 containerd[1463]: time="2025-11-08T00:20:18.355765598Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:20:18.355991 containerd[1463]: time="2025-11-08T00:20:18.355973088Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:20:18.356104 containerd[1463]: time="2025-11-08T00:20:18.356090934Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:20:18.364474 containerd[1463]: time="2025-11-08T00:20:18.363745129Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:20:18.364664 containerd[1463]: time="2025-11-08T00:20:18.364644045Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:20:18.364988 containerd[1463]: time="2025-11-08T00:20:18.364755932Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:20:18.364988 containerd[1463]: time="2025-11-08T00:20:18.364786988Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:20:18.364988 containerd[1463]: time="2025-11-08T00:20:18.364807349Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:20:18.365161 containerd[1463]: time="2025-11-08T00:20:18.365146784Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:20:18.366121 containerd[1463]: time="2025-11-08T00:20:18.366093626Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366380355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366404089Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366422318Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366437024Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366453514Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366466314Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366482156Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366498575Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366512277Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366524298Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366537297Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366561252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366575587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367107 containerd[1463]: time="2025-11-08T00:20:18.366589586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366606139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366618576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366632012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366671362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366688716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366703761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366720521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366733574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366774576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366790708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366811103Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:20:18.367577 containerd[1463]: time="2025-11-08T00:20:18.366859491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.368925990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.368956362Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.369037429Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.369062109Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.369186348Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.369204179Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.369215186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.369228290Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.369240903Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:20:18.370754 containerd[1463]: time="2025-11-08T00:20:18.369252974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:20:18.371064 containerd[1463]: time="2025-11-08T00:20:18.369595755Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:20:18.371064 containerd[1463]: time="2025-11-08T00:20:18.369656390Z" level=info msg="Connect containerd service" Nov 8 00:20:18.371064 containerd[1463]: time="2025-11-08T00:20:18.369700349Z" level=info msg="using legacy CRI server" Nov 8 00:20:18.371064 containerd[1463]: time="2025-11-08T00:20:18.369707719Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:20:18.371064 containerd[1463]: time="2025-11-08T00:20:18.369846266Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:20:18.378122 containerd[1463]: time="2025-11-08T00:20:18.373383641Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:20:18.378122 
containerd[1463]: time="2025-11-08T00:20:18.374005284Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:20:18.378122 containerd[1463]: time="2025-11-08T00:20:18.374055482Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:20:18.378122 containerd[1463]: time="2025-11-08T00:20:18.374107620Z" level=info msg="Start subscribing containerd event" Nov 8 00:20:18.378122 containerd[1463]: time="2025-11-08T00:20:18.374167709Z" level=info msg="Start recovering state" Nov 8 00:20:18.378122 containerd[1463]: time="2025-11-08T00:20:18.374262419Z" level=info msg="Start event monitor" Nov 8 00:20:18.378122 containerd[1463]: time="2025-11-08T00:20:18.374278262Z" level=info msg="Start snapshots syncer" Nov 8 00:20:18.378122 containerd[1463]: time="2025-11-08T00:20:18.374288508Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:20:18.378122 containerd[1463]: time="2025-11-08T00:20:18.374296110Z" level=info msg="Start streaming server" Nov 8 00:20:18.374494 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:20:18.385196 containerd[1463]: time="2025-11-08T00:20:18.384768610Z" level=info msg="containerd successfully booted in 0.120694s" Nov 8 00:20:18.570381 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:20:18.628030 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:20:18.644536 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:20:18.680637 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:20:18.682191 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:20:18.697407 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:20:18.749847 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:20:18.767481 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:20:18.782564 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:20:18.783836 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:20:18.814982 tar[1451]: linux-amd64/README.md Nov 8 00:20:18.832274 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:20:19.459919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:19.460961 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:20:19.463151 systemd[1]: Startup finished in 1.400s (kernel) + 5.747s (initrd) + 6.079s (userspace) = 13.227s. Nov 8 00:20:19.476548 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:20.207852 kubelet[1552]: E1108 00:20:20.207790 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:20.211037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:20.211211 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:20:20.211759 systemd[1]: kubelet.service: Consumed 1.355s CPU time. Nov 8 00:20:21.180482 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
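The "Startup finished" entry above splits boot time into kernel, initrd, and userspace phases. Here is a small parser for that exact phrasing, handy when trending boot times across reboots; the regexes assume only the format shown in the log. Note the phases sum to 13.226 s while systemd reports 13.227 s, because systemd rounds each phase from microsecond-precision values.

import re

LINE = ("Startup finished in 1.400s (kernel) + 5.747s (initrd) "
        "+ 6.079s (userspace) = 13.227s.")

phases = {name: float(sec)
          for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", LINE)}
total = float(re.search(r"= ([\d.]+)s", LINE).group(1))
print(phases)                       # {'kernel': 1.4, 'initrd': 5.747, 'userspace': 6.079}
print(sum(phases.values()), total)  # 13.226 vs. the reported 13.227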
Nov 8 00:20:21.192389 systemd[1]: Started sshd@0-64.23.225.39:22-139.178.68.195:33880.service - OpenSSH per-connection server daemon (139.178.68.195:33880). Nov 8 00:20:21.256381 sshd[1564]: Accepted publickey for core from 139.178.68.195 port 33880 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:21.259301 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:21.272573 systemd-logind[1440]: New session 1 of user core. Nov 8 00:20:21.274400 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:20:21.284352 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:20:21.302379 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:20:21.310347 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:20:21.324473 (systemd)[1568]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:20:21.490915 systemd[1568]: Queued start job for default target default.target. Nov 8 00:20:21.503367 systemd[1568]: Created slice app.slice - User Application Slice. Nov 8 00:20:21.503410 systemd[1568]: Reached target paths.target - Paths. Nov 8 00:20:21.503426 systemd[1568]: Reached target timers.target - Timers. Nov 8 00:20:21.505276 systemd[1568]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:20:21.520218 systemd[1568]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:20:21.520365 systemd[1568]: Reached target sockets.target - Sockets. Nov 8 00:20:21.520382 systemd[1568]: Reached target basic.target - Basic System. Nov 8 00:20:21.520446 systemd[1568]: Reached target default.target - Main User Target. Nov 8 00:20:21.520499 systemd[1568]: Startup finished in 181ms. Nov 8 00:20:21.520656 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:20:21.527164 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:20:21.603265 systemd[1]: Started sshd@1-64.23.225.39:22-139.178.68.195:33882.service - OpenSSH per-connection server daemon (139.178.68.195:33882). Nov 8 00:20:21.654103 sshd[1579]: Accepted publickey for core from 139.178.68.195 port 33882 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:21.656453 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:21.662134 systemd-logind[1440]: New session 2 of user core. Nov 8 00:20:21.669154 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:20:21.732426 sshd[1579]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:21.746985 systemd[1]: sshd@1-64.23.225.39:22-139.178.68.195:33882.service: Deactivated successfully. Nov 8 00:20:21.749546 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:20:21.751475 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:20:21.756248 systemd[1]: Started sshd@2-64.23.225.39:22-139.178.68.195:33886.service - OpenSSH per-connection server daemon (139.178.68.195:33886). Nov 8 00:20:21.757777 systemd-logind[1440]: Removed session 2. Nov 8 00:20:21.803361 sshd[1586]: Accepted publickey for core from 139.178.68.195 port 33886 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:21.805761 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:21.813962 systemd-logind[1440]: New session 3 of user core. 
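Every "Accepted publickey" line in these sessions carries the same fingerprint (SHA256:5cNj4...), so a single key is behind all of them. OpenSSH derives that string as the unpadded base64 of a SHA-256 digest over the raw public-key blob; a sketch of the derivation follows, where key_blob stands in for the actual key, which is not recoverable from the log.

import base64
import hashlib

def openssh_fingerprint(key_blob: bytes) -> str:
    # The SHA256:... form sshd logs: base64 of the digest, '=' padding stripped.
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# key_blob would come from base64.b64decode(<second field of an authorized_keys line>).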
Nov 8 00:20:21.821303 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:20:21.881193 sshd[1586]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:21.894313 systemd[1]: sshd@2-64.23.225.39:22-139.178.68.195:33886.service: Deactivated successfully. Nov 8 00:20:21.896432 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:20:21.898054 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:20:21.903404 systemd[1]: Started sshd@3-64.23.225.39:22-139.178.68.195:33900.service - OpenSSH per-connection server daemon (139.178.68.195:33900). Nov 8 00:20:21.905312 systemd-logind[1440]: Removed session 3. Nov 8 00:20:21.960767 sshd[1593]: Accepted publickey for core from 139.178.68.195 port 33900 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:21.962682 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:21.970921 systemd-logind[1440]: New session 4 of user core. Nov 8 00:20:21.978170 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:20:22.042111 sshd[1593]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:22.054131 systemd[1]: sshd@3-64.23.225.39:22-139.178.68.195:33900.service: Deactivated successfully. Nov 8 00:20:22.056781 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:20:22.058666 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:20:22.064227 systemd[1]: Started sshd@4-64.23.225.39:22-139.178.68.195:33914.service - OpenSSH per-connection server daemon (139.178.68.195:33914). Nov 8 00:20:22.065958 systemd-logind[1440]: Removed session 4. Nov 8 00:20:22.106047 sshd[1600]: Accepted publickey for core from 139.178.68.195 port 33914 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:22.108428 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:22.114586 systemd-logind[1440]: New session 5 of user core. Nov 8 00:20:22.128435 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:20:22.201463 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:20:22.201933 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:22.222072 sudo[1603]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:22.226127 sshd[1600]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:22.235170 systemd[1]: sshd@4-64.23.225.39:22-139.178.68.195:33914.service: Deactivated successfully. Nov 8 00:20:22.237612 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:20:22.240089 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:20:22.244315 systemd[1]: Started sshd@5-64.23.225.39:22-139.178.68.195:33930.service - OpenSSH per-connection server daemon (139.178.68.195:33930). Nov 8 00:20:22.246230 systemd-logind[1440]: Removed session 5. Nov 8 00:20:22.300937 sshd[1608]: Accepted publickey for core from 139.178.68.195 port 33930 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:22.303462 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:22.310483 systemd-logind[1440]: New session 6 of user core. Nov 8 00:20:22.313133 systemd[1]: Started session-6.scope - Session 6 of User core. 
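The sudo entry above ("core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1") follows sudo's fixed journal shape, which makes auditing privileged commands a one-regex job. A sketch over that exact line:

import re

SUDO_RE = re.compile(r"(\w+) : PWD=(\S+) ; USER=(\S+) ; COMMAND=(.+)")

line = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
user, cwd, runas, command = SUDO_RE.match(line).groups()
print(user, runas, command)  # core root /usr/sbin/setenforce 1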
Nov 8 00:20:22.395810 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:20:22.396383 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:22.401365 sudo[1612]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:22.409536 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:20:22.410349 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:22.431311 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:20:22.433901 auditctl[1615]: No rules Nov 8 00:20:22.434347 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:20:22.434575 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:20:22.438523 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:20:22.489217 augenrules[1633]: No rules Nov 8 00:20:22.490902 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:20:22.492672 sudo[1611]: pam_unix(sudo:session): session closed for user root Nov 8 00:20:22.498131 sshd[1608]: pam_unix(sshd:session): session closed for user core Nov 8 00:20:22.507861 systemd[1]: sshd@5-64.23.225.39:22-139.178.68.195:33930.service: Deactivated successfully. Nov 8 00:20:22.510658 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:20:22.514686 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:20:22.519839 systemd[1]: Started sshd@6-64.23.225.39:22-139.178.68.195:33936.service - OpenSSH per-connection server daemon (139.178.68.195:33936). Nov 8 00:20:22.521543 systemd-logind[1440]: Removed session 6. Nov 8 00:20:22.578008 sshd[1641]: Accepted publickey for core from 139.178.68.195 port 33936 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:20:22.580260 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:20:22.587078 systemd-logind[1440]: New session 7 of user core. Nov 8 00:20:22.594271 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:20:22.655250 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:20:22.655591 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:20:23.141433 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:20:23.141939 (dockerd)[1660]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:20:23.958193 dockerd[1660]: time="2025-11-08T00:20:23.956916583Z" level=info msg="Starting up" Nov 8 00:20:24.079152 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2317365747-merged.mount: Deactivated successfully. Nov 8 00:20:24.092513 systemd[1]: var-lib-docker-metacopy\x2dcheck1588410420-merged.mount: Deactivated successfully. Nov 8 00:20:24.113591 dockerd[1660]: time="2025-11-08T00:20:24.113540251Z" level=info msg="Loading containers: start." Nov 8 00:20:24.247967 kernel: Initializing XFRM netlink socket Nov 8 00:20:24.366981 systemd-networkd[1367]: docker0: Link UP Nov 8 00:20:24.388175 dockerd[1660]: time="2025-11-08T00:20:24.386879860Z" level=info msg="Loading containers: done." 
Nov 8 00:20:24.407051 dockerd[1660]: time="2025-11-08T00:20:24.406985955Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:20:24.407281 dockerd[1660]: time="2025-11-08T00:20:24.407110405Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:20:24.407281 dockerd[1660]: time="2025-11-08T00:20:24.407222261Z" level=info msg="Daemon has completed initialization" Nov 8 00:20:24.450351 dockerd[1660]: time="2025-11-08T00:20:24.449602211Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:20:24.450057 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:20:25.393443 containerd[1463]: time="2025-11-08T00:20:25.392825152Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 8 00:20:26.099250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795043636.mount: Deactivated successfully. Nov 8 00:20:27.383823 containerd[1463]: time="2025-11-08T00:20:27.382450467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:27.383823 containerd[1463]: time="2025-11-08T00:20:27.383315083Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Nov 8 00:20:27.383823 containerd[1463]: time="2025-11-08T00:20:27.383766856Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:27.387077 containerd[1463]: time="2025-11-08T00:20:27.387032808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:27.388553 containerd[1463]: time="2025-11-08T00:20:27.388478094Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.99557308s" Nov 8 00:20:27.388689 containerd[1463]: time="2025-11-08T00:20:27.388558635Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 8 00:20:27.389212 containerd[1463]: time="2025-11-08T00:20:27.389185759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 8 00:20:28.979927 containerd[1463]: time="2025-11-08T00:20:28.979412044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:28.981241 containerd[1463]: time="2025-11-08T00:20:28.980963497Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Nov 8 00:20:28.982451 containerd[1463]: time="2025-11-08T00:20:28.981778611Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:28.985733 containerd[1463]: time="2025-11-08T00:20:28.985668884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:28.987407 containerd[1463]: time="2025-11-08T00:20:28.987358566Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.598137183s" Nov 8 00:20:28.987622 containerd[1463]: time="2025-11-08T00:20:28.987591154Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 8 00:20:28.988894 containerd[1463]: time="2025-11-08T00:20:28.988830108Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 8 00:20:30.256511 containerd[1463]: time="2025-11-08T00:20:30.255105542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:30.258269 containerd[1463]: time="2025-11-08T00:20:30.258189306Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Nov 8 00:20:30.259317 containerd[1463]: time="2025-11-08T00:20:30.259252987Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:30.262801 containerd[1463]: time="2025-11-08T00:20:30.262735386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:30.265399 containerd[1463]: time="2025-11-08T00:20:30.264692882Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.27581476s" Nov 8 00:20:30.265399 containerd[1463]: time="2025-11-08T00:20:30.264757467Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 8 00:20:30.265632 containerd[1463]: time="2025-11-08T00:20:30.265484066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 8 00:20:30.330502 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:20:30.336514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:30.557483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
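The pull records above report both bytes read and wall time, which gives the effective registry throughput per image. The figures below are copied from the "stop pulling" and "Pulled image" entries; only the MiB/s numbers are computed here.

pulls = {
    # image: (bytes read per the log, reported pull time in seconds)
    "kube-apiserver:v1.33.5":          (30_114_893, 1.995_573),
    "kube-controller-manager:v1.33.5": (26_020_844, 1.598_137),
}

for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 2**20:.1f} MiB/s")  # ~14.4 and ~15.5 MiB/s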
Nov 8 00:20:30.570535 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:20:30.638975 kubelet[1880]: E1108 00:20:30.638918 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:20:30.644814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:20:30.645016 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:20:31.373727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794281800.mount: Deactivated successfully. Nov 8 00:20:32.005684 containerd[1463]: time="2025-11-08T00:20:32.004938236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:32.006539 containerd[1463]: time="2025-11-08T00:20:32.006484855Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Nov 8 00:20:32.008885 containerd[1463]: time="2025-11-08T00:20:32.008812041Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:32.015225 containerd[1463]: time="2025-11-08T00:20:32.015067395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:32.016232 containerd[1463]: time="2025-11-08T00:20:32.015723699Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.750201518s" Nov 8 00:20:32.016232 containerd[1463]: time="2025-11-08T00:20:32.015763288Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 8 00:20:32.016852 containerd[1463]: time="2025-11-08T00:20:32.016824921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 8 00:20:32.018326 systemd-resolved[1320]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Nov 8 00:20:32.569105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3791458426.mount: Deactivated successfully. 
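kubelet has now exited the same way twice (config file /var/lib/kubelet/config.yaml absent, which is expected until the node is configured), and systemd scheduled the first restart roughly ten seconds after the failure. The subtraction below uses the two journal timestamps; attributing the gap to RestartSec=10 in the unit file is an assumption, not something the log states.

from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"
failed    = datetime.strptime("Nov 08 00:20:20.211211", FMT)  # kubelet.service: Failed
scheduled = datetime.strptime("Nov 08 00:20:30.330502", FMT)  # Scheduled restart job
print(scheduled - failed)  # ~0:00:10.119, consistent with an assumed RestartSec=10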
Nov 8 00:20:33.587399 containerd[1463]: time="2025-11-08T00:20:33.585981165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:33.587399 containerd[1463]: time="2025-11-08T00:20:33.587329830Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Nov 8 00:20:33.588190 containerd[1463]: time="2025-11-08T00:20:33.588145424Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:33.592058 containerd[1463]: time="2025-11-08T00:20:33.591993523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:33.593997 containerd[1463]: time="2025-11-08T00:20:33.593936824Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.577073619s" Nov 8 00:20:33.593997 containerd[1463]: time="2025-11-08T00:20:33.593997841Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 8 00:20:33.595314 containerd[1463]: time="2025-11-08T00:20:33.595246026Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:20:34.072799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3762514185.mount: Deactivated successfully. 
Nov 8 00:20:34.077351 containerd[1463]: time="2025-11-08T00:20:34.077271676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:34.078201 containerd[1463]: time="2025-11-08T00:20:34.078140442Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 8 00:20:34.078643 containerd[1463]: time="2025-11-08T00:20:34.078617482Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:34.081528 containerd[1463]: time="2025-11-08T00:20:34.081480758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:34.082268 containerd[1463]: time="2025-11-08T00:20:34.082226477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 486.934926ms" Nov 8 00:20:34.082365 containerd[1463]: time="2025-11-08T00:20:34.082272148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:20:34.083435 containerd[1463]: time="2025-11-08T00:20:34.083410310Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 8 00:20:34.757314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254764972.mount: Deactivated successfully. Nov 8 00:20:35.103172 systemd-resolved[1320]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Nov 8 00:20:36.692242 containerd[1463]: time="2025-11-08T00:20:36.691907264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:36.693946 containerd[1463]: time="2025-11-08T00:20:36.693842160Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Nov 8 00:20:36.696722 containerd[1463]: time="2025-11-08T00:20:36.696630341Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:36.700921 containerd[1463]: time="2025-11-08T00:20:36.700134993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:20:36.702662 containerd[1463]: time="2025-11-08T00:20:36.702049493Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.618522089s" Nov 8 00:20:36.702662 containerd[1463]: time="2025-11-08T00:20:36.702111166Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 8 00:20:40.489576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:40.500423 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:40.547900 systemd[1]: Reloading requested from client PID 2034 ('systemctl') (unit session-7.scope)... Nov 8 00:20:40.547926 systemd[1]: Reloading... Nov 8 00:20:40.697919 zram_generator::config[2073]: No configuration found. Nov 8 00:20:40.827518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:40.906006 systemd[1]: Reloading finished in 357 ms. Nov 8 00:20:40.958043 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:20:40.958135 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:20:40.958383 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:40.965286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:41.097196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:41.108601 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:20:41.158759 kubelet[2127]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:41.160903 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 8 00:20:41.160903 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:41.160903 kubelet[2127]: I1108 00:20:41.159307 2127 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:20:41.765003 kubelet[2127]: I1108 00:20:41.764946 2127 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:20:41.765003 kubelet[2127]: I1108 00:20:41.764992 2127 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:20:41.765409 kubelet[2127]: I1108 00:20:41.765386 2127 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:20:41.797460 kubelet[2127]: I1108 00:20:41.797338 2127 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:20:41.799905 kubelet[2127]: E1108 00:20:41.798771 2127 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://64.23.225.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.23.225.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:20:41.813184 kubelet[2127]: E1108 00:20:41.813141 2127 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:20:41.813370 kubelet[2127]: I1108 00:20:41.813357 2127 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:20:41.818636 kubelet[2127]: I1108 00:20:41.818581 2127 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:20:41.823834 kubelet[2127]: I1108 00:20:41.823739 2127 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:20:41.826247 kubelet[2127]: I1108 00:20:41.823826 2127 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-f4234a6c60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:20:41.826247 kubelet[2127]: I1108 00:20:41.826238 2127 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:20:41.826247 kubelet[2127]: I1108 00:20:41.826261 2127 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:20:41.826526 kubelet[2127]: I1108 00:20:41.826488 2127 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:41.830461 kubelet[2127]: I1108 00:20:41.830161 2127 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:20:41.830461 kubelet[2127]: I1108 00:20:41.830218 2127 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:20:41.830461 kubelet[2127]: I1108 00:20:41.830271 2127 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:20:41.830461 kubelet[2127]: I1108 00:20:41.830302 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:20:41.838506 kubelet[2127]: E1108 00:20:41.838466 2127 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.225.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f4234a6c60&limit=500&resourceVersion=0\": dial tcp 64.23.225.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:20:41.842360 kubelet[2127]: E1108 00:20:41.841707 2127 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.225.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.225.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Nov 8 00:20:41.842889 kubelet[2127]: I1108 00:20:41.842716 2127 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:20:41.844327 kubelet[2127]: I1108 00:20:41.844256 2127 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:20:41.845899 kubelet[2127]: W1108 00:20:41.845081 2127 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:20:41.852184 kubelet[2127]: I1108 00:20:41.852147 2127 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:20:41.852316 kubelet[2127]: I1108 00:20:41.852242 2127 server.go:1289] "Started kubelet" Nov 8 00:20:41.854066 kubelet[2127]: I1108 00:20:41.852976 2127 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:20:41.854066 kubelet[2127]: I1108 00:20:41.853880 2127 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:20:41.857904 kubelet[2127]: I1108 00:20:41.857282 2127 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:20:41.857904 kubelet[2127]: I1108 00:20:41.857815 2127 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:20:41.860075 kubelet[2127]: E1108 00:20:41.858302 2127 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.23.225.39:6443/api/v1/namespaces/default/events\": dial tcp 64.23.225.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-f4234a6c60.1875e01c6b788289 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-f4234a6c60,UID:ci-4081.3.6-n-f4234a6c60,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-f4234a6c60,},FirstTimestamp:2025-11-08 00:20:41.852183177 +0000 UTC m=+0.738278316,LastTimestamp:2025-11-08 00:20:41.852183177 +0000 UTC m=+0.738278316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-f4234a6c60,}" Nov 8 00:20:41.863241 kubelet[2127]: I1108 00:20:41.862321 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:20:41.863757 kubelet[2127]: I1108 00:20:41.863731 2127 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:20:41.864485 kubelet[2127]: I1108 00:20:41.864460 2127 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:20:41.864916 kubelet[2127]: I1108 00:20:41.864896 2127 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:20:41.865016 kubelet[2127]: I1108 00:20:41.864986 2127 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:20:41.866206 kubelet[2127]: E1108 00:20:41.865976 2127 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.225.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.225.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:20:41.868082 kubelet[2127]: E1108 
00:20:41.866551 2127 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" Nov 8 00:20:41.869897 kubelet[2127]: E1108 00:20:41.869178 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.225.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f4234a6c60?timeout=10s\": dial tcp 64.23.225.39:6443: connect: connection refused" interval="200ms" Nov 8 00:20:41.869897 kubelet[2127]: I1108 00:20:41.869885 2127 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:20:41.870060 kubelet[2127]: I1108 00:20:41.870002 2127 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:20:41.874024 kubelet[2127]: I1108 00:20:41.873995 2127 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:20:41.898229 kubelet[2127]: I1108 00:20:41.895298 2127 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:20:41.898229 kubelet[2127]: I1108 00:20:41.896553 2127 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:20:41.898229 kubelet[2127]: I1108 00:20:41.896587 2127 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:20:41.898229 kubelet[2127]: I1108 00:20:41.896614 2127 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:20:41.898229 kubelet[2127]: I1108 00:20:41.896623 2127 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:20:41.898229 kubelet[2127]: E1108 00:20:41.896669 2127 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:20:41.910007 kubelet[2127]: E1108 00:20:41.909966 2127 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.225.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.225.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:20:41.911273 kubelet[2127]: I1108 00:20:41.911245 2127 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:20:41.911273 kubelet[2127]: I1108 00:20:41.911262 2127 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:20:41.911273 kubelet[2127]: I1108 00:20:41.911278 2127 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:41.912983 kubelet[2127]: I1108 00:20:41.912951 2127 policy_none.go:49] "None policy: Start" Nov 8 00:20:41.912983 kubelet[2127]: I1108 00:20:41.912980 2127 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:20:41.913154 kubelet[2127]: I1108 00:20:41.912992 2127 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:20:41.920176 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:20:41.937389 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:20:41.942160 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 8 00:20:41.952154 kubelet[2127]: E1108 00:20:41.950246 2127 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:20:41.952154 kubelet[2127]: I1108 00:20:41.950481 2127 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:20:41.952154 kubelet[2127]: I1108 00:20:41.950495 2127 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:20:41.952154 kubelet[2127]: I1108 00:20:41.951914 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:20:41.956248 kubelet[2127]: E1108 00:20:41.956133 2127 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:20:41.956248 kubelet[2127]: E1108 00:20:41.956209 2127 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-f4234a6c60\" not found" Nov 8 00:20:42.014759 systemd[1]: Created slice kubepods-burstable-pode9033b29b3bebc82f2844b24705a694c.slice - libcontainer container kubepods-burstable-pode9033b29b3bebc82f2844b24705a694c.slice. Nov 8 00:20:42.031225 kubelet[2127]: E1108 00:20:42.031085 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.036750 systemd[1]: Created slice kubepods-burstable-podc2e094eddf3b9cae299358274a4a4747.slice - libcontainer container kubepods-burstable-podc2e094eddf3b9cae299358274a4a4747.slice. Nov 8 00:20:42.046821 kubelet[2127]: E1108 00:20:42.046770 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.052028 systemd[1]: Created slice kubepods-burstable-poda719e2dd4f7bc48ebdfa53718c1b546c.slice - libcontainer container kubepods-burstable-poda719e2dd4f7bc48ebdfa53718c1b546c.slice. 
Nov 8 00:20:42.053983 kubelet[2127]: I1108 00:20:42.053938 2127 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.054410 kubelet[2127]: E1108 00:20:42.054377 2127 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.225.39:6443/api/v1/nodes\": dial tcp 64.23.225.39:6443: connect: connection refused" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.057566 kubelet[2127]: E1108 00:20:42.057534 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.066258 kubelet[2127]: I1108 00:20:42.066207 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.066464 kubelet[2127]: I1108 00:20:42.066264 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.066464 kubelet[2127]: I1108 00:20:42.066297 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.066464 kubelet[2127]: I1108 00:20:42.066326 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9033b29b3bebc82f2844b24705a694c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f4234a6c60\" (UID: \"e9033b29b3bebc82f2844b24705a694c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.066464 kubelet[2127]: I1108 00:20:42.066353 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.066464 kubelet[2127]: I1108 00:20:42.066377 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.066715 kubelet[2127]: I1108 00:20:42.066402 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a719e2dd4f7bc48ebdfa53718c1b546c-kubeconfig\") pod 
\"kube-scheduler-ci-4081.3.6-n-f4234a6c60\" (UID: \"a719e2dd4f7bc48ebdfa53718c1b546c\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.066715 kubelet[2127]: I1108 00:20:42.066429 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9033b29b3bebc82f2844b24705a694c-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f4234a6c60\" (UID: \"e9033b29b3bebc82f2844b24705a694c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.066715 kubelet[2127]: I1108 00:20:42.066455 2127 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9033b29b3bebc82f2844b24705a694c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-f4234a6c60\" (UID: \"e9033b29b3bebc82f2844b24705a694c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.070287 kubelet[2127]: E1108 00:20:42.070238 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.225.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f4234a6c60?timeout=10s\": dial tcp 64.23.225.39:6443: connect: connection refused" interval="400ms" Nov 8 00:20:42.256473 kubelet[2127]: I1108 00:20:42.256288 2127 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.257264 kubelet[2127]: E1108 00:20:42.257232 2127 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.225.39:6443/api/v1/nodes\": dial tcp 64.23.225.39:6443: connect: connection refused" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.334153 kubelet[2127]: E1108 00:20:42.333668 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:42.336396 containerd[1463]: time="2025-11-08T00:20:42.336339177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-f4234a6c60,Uid:e9033b29b3bebc82f2844b24705a694c,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:42.338212 systemd-resolved[1320]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Nov 8 00:20:42.348854 kubelet[2127]: E1108 00:20:42.348209 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:42.354839 containerd[1463]: time="2025-11-08T00:20:42.354507482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-f4234a6c60,Uid:c2e094eddf3b9cae299358274a4a4747,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:42.358777 kubelet[2127]: E1108 00:20:42.358464 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:42.359150 containerd[1463]: time="2025-11-08T00:20:42.359100222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-f4234a6c60,Uid:a719e2dd4f7bc48ebdfa53718c1b546c,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:42.470945 kubelet[2127]: E1108 00:20:42.470861 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.225.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f4234a6c60?timeout=10s\": dial tcp 64.23.225.39:6443: connect: connection refused" interval="800ms" Nov 8 00:20:42.659809 kubelet[2127]: I1108 00:20:42.659240 2127 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.659809 kubelet[2127]: E1108 00:20:42.659603 2127 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.225.39:6443/api/v1/nodes\": dial tcp 64.23.225.39:6443: connect: connection refused" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:42.803335 kubelet[2127]: E1108 00:20:42.803278 2127 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://64.23.225.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.23.225.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:20:42.841839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3807610200.mount: Deactivated successfully. 
Nov 8 00:20:42.850596 containerd[1463]: time="2025-11-08T00:20:42.850539559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:42.851432 containerd[1463]: time="2025-11-08T00:20:42.851393204Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:20:42.852455 containerd[1463]: time="2025-11-08T00:20:42.852381898Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:42.853430 containerd[1463]: time="2025-11-08T00:20:42.853285530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:20:42.853988 containerd[1463]: time="2025-11-08T00:20:42.853943248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:20:42.854218 containerd[1463]: time="2025-11-08T00:20:42.854168490Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:42.859907 containerd[1463]: time="2025-11-08T00:20:42.858357780Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:42.859907 containerd[1463]: time="2025-11-08T00:20:42.859216655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 500.021577ms" Nov 8 00:20:42.861016 containerd[1463]: time="2025-11-08T00:20:42.860983941Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 506.37559ms" Nov 8 00:20:42.862582 containerd[1463]: time="2025-11-08T00:20:42.862528927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.089934ms" Nov 8 00:20:42.863127 containerd[1463]: time="2025-11-08T00:20:42.863099348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:20:42.905890 kubelet[2127]: E1108 00:20:42.905828 2127 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://64.23.225.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.23.225.39:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:20:42.981013 kubelet[2127]: E1108 00:20:42.980378 2127 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://64.23.225.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.23.225.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:20:43.028343 containerd[1463]: time="2025-11-08T00:20:43.028221910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:43.028343 containerd[1463]: time="2025-11-08T00:20:43.028284625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:43.028759 containerd[1463]: time="2025-11-08T00:20:43.028427748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:43.028992 containerd[1463]: time="2025-11-08T00:20:43.028915320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:43.034518 containerd[1463]: time="2025-11-08T00:20:43.034197188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:43.034518 containerd[1463]: time="2025-11-08T00:20:43.034273373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:43.034518 containerd[1463]: time="2025-11-08T00:20:43.034292489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:43.034518 containerd[1463]: time="2025-11-08T00:20:43.034379618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:43.039834 containerd[1463]: time="2025-11-08T00:20:43.039516097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:43.039834 containerd[1463]: time="2025-11-08T00:20:43.039604416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:43.039834 containerd[1463]: time="2025-11-08T00:20:43.039643196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:43.039834 containerd[1463]: time="2025-11-08T00:20:43.039738844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:43.057076 systemd[1]: Started cri-containerd-e53047162fc8421132bc5cadb1d79adae34edbfe5ab468a5385aee7e5c119cb4.scope - libcontainer container e53047162fc8421132bc5cadb1d79adae34edbfe5ab468a5385aee7e5c119cb4. Nov 8 00:20:43.080207 systemd[1]: Started cri-containerd-4e49fcba61bae61474f23e376d90bca27405d05ac34e4358d28e68eef19733e6.scope - libcontainer container 4e49fcba61bae61474f23e376d90bca27405d05ac34e4358d28e68eef19733e6. 
Nov 8 00:20:43.089494 systemd[1]: Started cri-containerd-74b49dbeea8c4930fd1fd64678989138421e036f24e864e51bd5b45976cee8ca.scope - libcontainer container 74b49dbeea8c4930fd1fd64678989138421e036f24e864e51bd5b45976cee8ca. Nov 8 00:20:43.162703 containerd[1463]: time="2025-11-08T00:20:43.161792591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-f4234a6c60,Uid:e9033b29b3bebc82f2844b24705a694c,Namespace:kube-system,Attempt:0,} returns sandbox id \"74b49dbeea8c4930fd1fd64678989138421e036f24e864e51bd5b45976cee8ca\"" Nov 8 00:20:43.168781 kubelet[2127]: E1108 00:20:43.168745 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:43.170185 containerd[1463]: time="2025-11-08T00:20:43.169502566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-f4234a6c60,Uid:c2e094eddf3b9cae299358274a4a4747,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e49fcba61bae61474f23e376d90bca27405d05ac34e4358d28e68eef19733e6\"" Nov 8 00:20:43.175065 kubelet[2127]: E1108 00:20:43.175033 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:43.182973 containerd[1463]: time="2025-11-08T00:20:43.182805078Z" level=info msg="CreateContainer within sandbox \"4e49fcba61bae61474f23e376d90bca27405d05ac34e4358d28e68eef19733e6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:20:43.183168 containerd[1463]: time="2025-11-08T00:20:43.183055596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-f4234a6c60,Uid:a719e2dd4f7bc48ebdfa53718c1b546c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e53047162fc8421132bc5cadb1d79adae34edbfe5ab468a5385aee7e5c119cb4\"" Nov 8 00:20:43.185375 containerd[1463]: time="2025-11-08T00:20:43.185334242Z" level=info msg="CreateContainer within sandbox \"74b49dbeea8c4930fd1fd64678989138421e036f24e864e51bd5b45976cee8ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:20:43.186131 kubelet[2127]: E1108 00:20:43.185850 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:43.190484 containerd[1463]: time="2025-11-08T00:20:43.190448218Z" level=info msg="CreateContainer within sandbox \"e53047162fc8421132bc5cadb1d79adae34edbfe5ab468a5385aee7e5c119cb4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:20:43.198173 containerd[1463]: time="2025-11-08T00:20:43.198030718Z" level=info msg="CreateContainer within sandbox \"74b49dbeea8c4930fd1fd64678989138421e036f24e864e51bd5b45976cee8ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3cbe5109bf9ec9bcb66857ec207bb9c90affe30286c2e6c69b5b66447692a7c8\"" Nov 8 00:20:43.199407 containerd[1463]: time="2025-11-08T00:20:43.199351533Z" level=info msg="StartContainer for \"3cbe5109bf9ec9bcb66857ec207bb9c90affe30286c2e6c69b5b66447692a7c8\"" Nov 8 00:20:43.201904 containerd[1463]: time="2025-11-08T00:20:43.201346221Z" level=info msg="CreateContainer within sandbox \"4e49fcba61bae61474f23e376d90bca27405d05ac34e4358d28e68eef19733e6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"be594407c3df93e709f05d1a90ebb10f8631111abab0d9f48a4d443931c91950\"" Nov 8 00:20:43.202388 containerd[1463]: time="2025-11-08T00:20:43.202366298Z" level=info msg="StartContainer for \"be594407c3df93e709f05d1a90ebb10f8631111abab0d9f48a4d443931c91950\"" Nov 8 00:20:43.207268 containerd[1463]: time="2025-11-08T00:20:43.207224449Z" level=info msg="CreateContainer within sandbox \"e53047162fc8421132bc5cadb1d79adae34edbfe5ab468a5385aee7e5c119cb4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1951ee7a3775a4be3b2cc1cab3b88aed052462f928c9f00066bb97ce0d762af5\"" Nov 8 00:20:43.208909 containerd[1463]: time="2025-11-08T00:20:43.208846542Z" level=info msg="StartContainer for \"1951ee7a3775a4be3b2cc1cab3b88aed052462f928c9f00066bb97ce0d762af5\"" Nov 8 00:20:43.253176 systemd[1]: Started cri-containerd-3cbe5109bf9ec9bcb66857ec207bb9c90affe30286c2e6c69b5b66447692a7c8.scope - libcontainer container 3cbe5109bf9ec9bcb66857ec207bb9c90affe30286c2e6c69b5b66447692a7c8. Nov 8 00:20:43.255333 systemd[1]: Started cri-containerd-be594407c3df93e709f05d1a90ebb10f8631111abab0d9f48a4d443931c91950.scope - libcontainer container be594407c3df93e709f05d1a90ebb10f8631111abab0d9f48a4d443931c91950. Nov 8 00:20:43.260518 systemd[1]: Started cri-containerd-1951ee7a3775a4be3b2cc1cab3b88aed052462f928c9f00066bb97ce0d762af5.scope - libcontainer container 1951ee7a3775a4be3b2cc1cab3b88aed052462f928c9f00066bb97ce0d762af5. Nov 8 00:20:43.272775 kubelet[2127]: E1108 00:20:43.272675 2127 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.23.225.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f4234a6c60?timeout=10s\": dial tcp 64.23.225.39:6443: connect: connection refused" interval="1.6s" Nov 8 00:20:43.301582 kubelet[2127]: E1108 00:20:43.301428 2127 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://64.23.225.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f4234a6c60&limit=500&resourceVersion=0\": dial tcp 64.23.225.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:20:43.354698 containerd[1463]: time="2025-11-08T00:20:43.354460540Z" level=info msg="StartContainer for \"3cbe5109bf9ec9bcb66857ec207bb9c90affe30286c2e6c69b5b66447692a7c8\" returns successfully" Nov 8 00:20:43.374779 containerd[1463]: time="2025-11-08T00:20:43.374190064Z" level=info msg="StartContainer for \"be594407c3df93e709f05d1a90ebb10f8631111abab0d9f48a4d443931c91950\" returns successfully" Nov 8 00:20:43.400041 containerd[1463]: time="2025-11-08T00:20:43.399968856Z" level=info msg="StartContainer for \"1951ee7a3775a4be3b2cc1cab3b88aed052462f928c9f00066bb97ce0d762af5\" returns successfully" Nov 8 00:20:43.461559 kubelet[2127]: I1108 00:20:43.461169 2127 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:43.462213 kubelet[2127]: E1108 00:20:43.462174 2127 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://64.23.225.39:6443/api/v1/nodes\": dial tcp 64.23.225.39:6443: connect: connection refused" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:43.926742 kubelet[2127]: E1108 00:20:43.926497 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:43.927857 kubelet[2127]: E1108 00:20:43.927475 2127 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:43.928940 kubelet[2127]: E1108 00:20:43.928916 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:43.929162 kubelet[2127]: E1108 00:20:43.929066 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:43.932399 kubelet[2127]: E1108 00:20:43.932163 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:43.932399 kubelet[2127]: E1108 00:20:43.932310 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:44.935142 kubelet[2127]: E1108 00:20:44.934793 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:44.935142 kubelet[2127]: E1108 00:20:44.934971 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:44.937253 kubelet[2127]: E1108 00:20:44.937013 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:44.937253 kubelet[2127]: E1108 00:20:44.937175 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:45.007911 kubelet[2127]: E1108 00:20:45.006826 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:45.008313 kubelet[2127]: E1108 00:20:45.008293 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:45.063917 kubelet[2127]: I1108 00:20:45.063384 2127 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:45.936938 kubelet[2127]: E1108 00:20:45.936826 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:45.938321 kubelet[2127]: E1108 00:20:45.937920 2127 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:45.938321 kubelet[2127]: E1108 00:20:45.938050 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:45.938321 
kubelet[2127]: E1108 00:20:45.938252 2127 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:46.120778 kubelet[2127]: E1108 00:20:46.120722 2127 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-f4234a6c60\" not found" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:46.290497 kubelet[2127]: I1108 00:20:46.290327 2127 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:46.371691 kubelet[2127]: I1108 00:20:46.371620 2127 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:46.400737 kubelet[2127]: E1108 00:20:46.400675 2127 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-f4234a6c60\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:46.400737 kubelet[2127]: I1108 00:20:46.400734 2127 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:46.403900 kubelet[2127]: E1108 00:20:46.402966 2127 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-f4234a6c60\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:46.403900 kubelet[2127]: I1108 00:20:46.403003 2127 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:46.415701 kubelet[2127]: E1108 00:20:46.415644 2127 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:46.845779 kubelet[2127]: I1108 00:20:46.845435 2127 apiserver.go:52] "Watching apiserver" Nov 8 00:20:46.865702 kubelet[2127]: I1108 00:20:46.865645 2127 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:20:48.560509 systemd[1]: Reloading requested from client PID 2411 ('systemctl') (unit session-7.scope)... Nov 8 00:20:48.560530 systemd[1]: Reloading... Nov 8 00:20:48.658936 zram_generator::config[2450]: No configuration found. Nov 8 00:20:48.838455 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:20:48.933705 systemd[1]: Reloading finished in 372 ms. Nov 8 00:20:48.980818 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:48.999705 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:20:49.000016 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:20:49.000093 systemd[1]: kubelet.service: Consumed 1.186s CPU time, 127.4M memory peak, 0B memory swap peak. Nov 8 00:20:49.008324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:20:49.169790 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:20:49.183490 (kubelet)[2501]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:20:49.260297 kubelet[2501]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:49.260297 kubelet[2501]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:20:49.260297 kubelet[2501]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:20:49.261901 kubelet[2501]: I1108 00:20:49.260786 2501 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:20:49.273602 kubelet[2501]: I1108 00:20:49.273550 2501 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:20:49.273602 kubelet[2501]: I1108 00:20:49.273585 2501 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:20:49.273904 kubelet[2501]: I1108 00:20:49.273848 2501 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:20:49.277933 kubelet[2501]: I1108 00:20:49.277744 2501 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:20:49.285896 kubelet[2501]: I1108 00:20:49.285488 2501 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:20:49.297556 kubelet[2501]: E1108 00:20:49.297506 2501 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:20:49.297899 kubelet[2501]: I1108 00:20:49.297855 2501 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:20:49.302410 kubelet[2501]: I1108 00:20:49.302350 2501 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:20:49.302738 kubelet[2501]: I1108 00:20:49.302702 2501 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:20:49.303555 kubelet[2501]: I1108 00:20:49.302737 2501 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-f4234a6c60","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:20:49.304972 kubelet[2501]: I1108 00:20:49.304941 2501 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:20:49.305162 kubelet[2501]: I1108 00:20:49.305136 2501 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:20:49.306765 kubelet[2501]: I1108 00:20:49.306736 2501 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:49.307463 kubelet[2501]: I1108 00:20:49.307273 2501 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:20:49.307463 kubelet[2501]: I1108 00:20:49.307319 2501 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:20:49.308247 kubelet[2501]: I1108 00:20:49.308099 2501 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:20:49.308247 kubelet[2501]: I1108 00:20:49.308141 2501 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:20:49.317899 kubelet[2501]: I1108 00:20:49.313435 2501 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:20:49.317899 kubelet[2501]: I1108 00:20:49.314193 2501 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:20:49.318114 kubelet[2501]: I1108 00:20:49.317942 2501 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:20:49.318114 kubelet[2501]: I1108 00:20:49.318002 2501 server.go:1289] "Started kubelet" Nov 8 00:20:49.322067 kubelet[2501]: I1108 00:20:49.321965 2501 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:20:49.338514 kubelet[2501]: I1108 00:20:49.337221 
2501 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:20:49.338514 kubelet[2501]: I1108 00:20:49.338280 2501 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:20:49.351487 kubelet[2501]: I1108 00:20:49.351400 2501 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:20:49.351860 kubelet[2501]: I1108 00:20:49.351830 2501 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:20:49.352300 kubelet[2501]: I1108 00:20:49.352256 2501 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:20:49.355408 kubelet[2501]: I1108 00:20:49.355371 2501 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:20:49.355666 kubelet[2501]: E1108 00:20:49.355636 2501 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f4234a6c60\" not found" Nov 8 00:20:49.358248 kubelet[2501]: I1108 00:20:49.358192 2501 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:20:49.359363 kubelet[2501]: I1108 00:20:49.358336 2501 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:20:49.360608 kubelet[2501]: I1108 00:20:49.360570 2501 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:20:49.360762 kubelet[2501]: I1108 00:20:49.360704 2501 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:20:49.367930 kubelet[2501]: I1108 00:20:49.366484 2501 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:20:49.367930 kubelet[2501]: E1108 00:20:49.366778 2501 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:20:49.371294 kubelet[2501]: I1108 00:20:49.371209 2501 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:20:49.371294 kubelet[2501]: I1108 00:20:49.371241 2501 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:20:49.371294 kubelet[2501]: I1108 00:20:49.371265 2501 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:20:49.371294 kubelet[2501]: I1108 00:20:49.371272 2501 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:20:49.371294 kubelet[2501]: E1108 00:20:49.371317 2501 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:20:49.373304 kubelet[2501]: I1108 00:20:49.373276 2501 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:20:49.434658 kubelet[2501]: I1108 00:20:49.434531 2501 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:20:49.434860 kubelet[2501]: I1108 00:20:49.434824 2501 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:20:49.434971 kubelet[2501]: I1108 00:20:49.434961 2501 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:20:49.435233 kubelet[2501]: I1108 00:20:49.435208 2501 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:20:49.435345 kubelet[2501]: I1108 00:20:49.435317 2501 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:20:49.435501 kubelet[2501]: I1108 00:20:49.435490 2501 policy_none.go:49] "None policy: Start" Nov 8 00:20:49.435634 kubelet[2501]: I1108 00:20:49.435622 2501 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:20:49.435737 kubelet[2501]: I1108 00:20:49.435728 2501 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:20:49.436022 kubelet[2501]: I1108 00:20:49.436006 2501 state_mem.go:75] "Updated machine memory state" Nov 8 00:20:49.440746 kubelet[2501]: E1108 00:20:49.440714 2501 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:20:49.442506 kubelet[2501]: I1108 00:20:49.441499 2501 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:20:49.442506 kubelet[2501]: I1108 00:20:49.441521 2501 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:20:49.442506 kubelet[2501]: I1108 00:20:49.441934 2501 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:20:49.444967 kubelet[2501]: E1108 00:20:49.444941 2501 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:20:49.472368 kubelet[2501]: I1108 00:20:49.472316 2501 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.472692 kubelet[2501]: I1108 00:20:49.472530 2501 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.472862 kubelet[2501]: I1108 00:20:49.472317 2501 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.480930 kubelet[2501]: I1108 00:20:49.480895 2501 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:20:49.484478 kubelet[2501]: I1108 00:20:49.483923 2501 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:20:49.484890 kubelet[2501]: I1108 00:20:49.483455 2501 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:20:49.542733 kubelet[2501]: I1108 00:20:49.542704 2501 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.556164 kubelet[2501]: I1108 00:20:49.555204 2501 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.556164 kubelet[2501]: I1108 00:20:49.555295 2501 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.560801 kubelet[2501]: I1108 00:20:49.560603 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.560801 kubelet[2501]: I1108 00:20:49.560666 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a719e2dd4f7bc48ebdfa53718c1b546c-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-f4234a6c60\" (UID: \"a719e2dd4f7bc48ebdfa53718c1b546c\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.560801 kubelet[2501]: I1108 00:20:49.560706 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.560801 kubelet[2501]: I1108 00:20:49.560731 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.560801 kubelet[2501]: I1108 00:20:49.560749 2501 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9033b29b3bebc82f2844b24705a694c-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f4234a6c60\" (UID: \"e9033b29b3bebc82f2844b24705a694c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.562921 kubelet[2501]: I1108 00:20:49.561076 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9033b29b3bebc82f2844b24705a694c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f4234a6c60\" (UID: \"e9033b29b3bebc82f2844b24705a694c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.562921 kubelet[2501]: I1108 00:20:49.561164 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9033b29b3bebc82f2844b24705a694c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-f4234a6c60\" (UID: \"e9033b29b3bebc82f2844b24705a694c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.562921 kubelet[2501]: I1108 00:20:49.561182 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.562921 kubelet[2501]: I1108 00:20:49.561199 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c2e094eddf3b9cae299358274a4a4747-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f4234a6c60\" (UID: \"c2e094eddf3b9cae299358274a4a4747\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:49.782714 kubelet[2501]: E1108 00:20:49.782367 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:49.786933 kubelet[2501]: E1108 00:20:49.786001 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:49.786933 kubelet[2501]: E1108 00:20:49.786167 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:50.327094 kubelet[2501]: I1108 00:20:50.327031 2501 apiserver.go:52] "Watching apiserver" Nov 8 00:20:50.360923 kubelet[2501]: I1108 00:20:50.360886 2501 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:20:50.408804 kubelet[2501]: I1108 00:20:50.408506 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f4234a6c60" podStartSLOduration=1.4084904520000001 podStartE2EDuration="1.408490452s" podCreationTimestamp="2025-11-08 00:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:50.381459651 +0000 
UTC m=+1.192022756" watchObservedRunningTime="2025-11-08 00:20:50.408490452 +0000 UTC m=+1.219053554" Nov 8 00:20:50.415924 kubelet[2501]: E1108 00:20:50.413252 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:50.415924 kubelet[2501]: I1108 00:20:50.413402 2501 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:50.418021 kubelet[2501]: E1108 00:20:50.417989 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:50.424883 kubelet[2501]: I1108 00:20:50.424802 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f4234a6c60" podStartSLOduration=1.424784318 podStartE2EDuration="1.424784318s" podCreationTimestamp="2025-11-08 00:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:50.408689026 +0000 UTC m=+1.219252131" watchObservedRunningTime="2025-11-08 00:20:50.424784318 +0000 UTC m=+1.235347417" Nov 8 00:20:50.426130 kubelet[2501]: I1108 00:20:50.425241 2501 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:20:50.426130 kubelet[2501]: E1108 00:20:50.425313 2501 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-f4234a6c60\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" Nov 8 00:20:50.426130 kubelet[2501]: E1108 00:20:50.425566 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:50.449815 kubelet[2501]: I1108 00:20:50.449754 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f4234a6c60" podStartSLOduration=1.449733204 podStartE2EDuration="1.449733204s" podCreationTimestamp="2025-11-08 00:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:50.432385345 +0000 UTC m=+1.242948450" watchObservedRunningTime="2025-11-08 00:20:50.449733204 +0000 UTC m=+1.260296287" Nov 8 00:20:51.416544 kubelet[2501]: E1108 00:20:51.416060 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:51.416544 kubelet[2501]: E1108 00:20:51.416269 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:52.418253 kubelet[2501]: E1108 00:20:52.417819 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:54.000929 kubelet[2501]: E1108 00:20:54.000700 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:54.422243 kubelet[2501]: E1108 00:20:54.421758 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:55.525337 kubelet[2501]: I1108 00:20:55.525244 2501 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:20:55.526705 containerd[1463]: time="2025-11-08T00:20:55.526653675Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:20:55.527718 kubelet[2501]: I1108 00:20:55.527328 2501 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:20:56.601380 systemd[1]: Created slice kubepods-besteffort-podb730368d_c604_481e_af47_f444e01d9d72.slice - libcontainer container kubepods-besteffort-podb730368d_c604_481e_af47_f444e01d9d72.slice. Nov 8 00:20:56.607237 kubelet[2501]: I1108 00:20:56.607075 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b730368d-c604-481e-af47-f444e01d9d72-xtables-lock\") pod \"kube-proxy-snwk9\" (UID: \"b730368d-c604-481e-af47-f444e01d9d72\") " pod="kube-system/kube-proxy-snwk9" Nov 8 00:20:56.607237 kubelet[2501]: I1108 00:20:56.607113 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b730368d-c604-481e-af47-f444e01d9d72-lib-modules\") pod \"kube-proxy-snwk9\" (UID: \"b730368d-c604-481e-af47-f444e01d9d72\") " pod="kube-system/kube-proxy-snwk9" Nov 8 00:20:56.607237 kubelet[2501]: I1108 00:20:56.607133 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsn7l\" (UniqueName: \"kubernetes.io/projected/b730368d-c604-481e-af47-f444e01d9d72-kube-api-access-zsn7l\") pod \"kube-proxy-snwk9\" (UID: \"b730368d-c604-481e-af47-f444e01d9d72\") " pod="kube-system/kube-proxy-snwk9" Nov 8 00:20:56.607237 kubelet[2501]: I1108 00:20:56.607163 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b730368d-c604-481e-af47-f444e01d9d72-kube-proxy\") pod \"kube-proxy-snwk9\" (UID: \"b730368d-c604-481e-af47-f444e01d9d72\") " pod="kube-system/kube-proxy-snwk9" Nov 8 00:20:56.785311 systemd[1]: Created slice kubepods-besteffort-pod3b19c9c2_caa6_4868_832e_8239d6493b6f.slice - libcontainer container kubepods-besteffort-pod3b19c9c2_caa6_4868_832e_8239d6493b6f.slice. 
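The recurring dns.go:153 "Nameserver limits exceeded" errors above are kubelet's resolver-conf check firing: Linux resolvers honor at most three nameserver entries, and this node's /etc/resolv.conf evidently carries more than that (the applied line even lists 67.207.67.3 twice, and duplicates count against the limit). A minimal Go sketch of that trimming logic follows; it is illustrative, not kubelet's actual code, though the file path and the limit of three match the defaults kubelet inherits:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic glibc limit of three usable
// nameserver entries per resolv.conf, which kubelet enforces too.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// This is the condition behind "Nameserver limits exceeded":
		// everything past the first three entries is silently dropped.
		fmt.Printf("limit exceeded, keeping %v, dropping %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		return
	}
	fmt.Printf("nameservers within limit: %v\n", servers)
}
```

On this droplet the likely remedy would be deduplicating the DNS servers handed to the host (for example via the DigitalOcean network configuration), after which the warning stops repeating.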
Nov 8 00:20:56.808902 kubelet[2501]: I1108 00:20:56.808801 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58rhb\" (UniqueName: \"kubernetes.io/projected/3b19c9c2-caa6-4868-832e-8239d6493b6f-kube-api-access-58rhb\") pod \"tigera-operator-7dcd859c48-6wdds\" (UID: \"3b19c9c2-caa6-4868-832e-8239d6493b6f\") " pod="tigera-operator/tigera-operator-7dcd859c48-6wdds" Nov 8 00:20:56.809256 kubelet[2501]: I1108 00:20:56.809217 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3b19c9c2-caa6-4868-832e-8239d6493b6f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6wdds\" (UID: \"3b19c9c2-caa6-4868-832e-8239d6493b6f\") " pod="tigera-operator/tigera-operator-7dcd859c48-6wdds" Nov 8 00:20:56.911352 kubelet[2501]: E1108 00:20:56.910504 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:56.913350 containerd[1463]: time="2025-11-08T00:20:56.913210363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snwk9,Uid:b730368d-c604-481e-af47-f444e01d9d72,Namespace:kube-system,Attempt:0,}" Nov 8 00:20:56.958269 containerd[1463]: time="2025-11-08T00:20:56.957735895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:56.958269 containerd[1463]: time="2025-11-08T00:20:56.957833732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:56.958269 containerd[1463]: time="2025-11-08T00:20:56.957853840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:56.958269 containerd[1463]: time="2025-11-08T00:20:56.958009580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:57.005229 systemd[1]: Started cri-containerd-27e21a160756b8e8b54b533788d6b29de2fe8a4f31ea22222ab5a1d0b85fdbef.scope - libcontainer container 27e21a160756b8e8b54b533788d6b29de2fe8a4f31ea22222ab5a1d0b85fdbef. 
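The RunPodSandbox and CreateContainer messages in this stretch are kubelet driving containerd over the CRI gRPC API: a pause sandbox (27e21a160756…) comes up for kube-proxy-snwk9 first, then the kube-proxy container is created inside it. Below is a small read-only sketch of talking to the same endpoint with the published k8s.io/cri-api client; the socket path is containerd's default, and this is a stand-in for inspection, not kubelet's internal wiring:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same CRI endpoint kubelet uses for the RunPodSandbox and
	// CreateContainer calls logged above; this is containerd's
	// default socket path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// ListPodSandbox is the read-only counterpart of the RunPodSandbox
	// call that created kube-proxy-snwk9's sandbox.
	resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range resp.Items {
		// Sandbox IDs are 64 hex characters; the 12-char prefix
		// matches the short form crictl prints.
		fmt.Printf("%s  %s/%s  %s\n",
			s.Id[:12], s.Metadata.Namespace, s.Metadata.Name, s.State)
	}
}
```

Run on the node while the log above is being written, this would show the kube-proxy-snwk9 sandbox in SANDBOX_READY state.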
Nov 8 00:20:57.037375 containerd[1463]: time="2025-11-08T00:20:57.037315613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snwk9,Uid:b730368d-c604-481e-af47-f444e01d9d72,Namespace:kube-system,Attempt:0,} returns sandbox id \"27e21a160756b8e8b54b533788d6b29de2fe8a4f31ea22222ab5a1d0b85fdbef\"" Nov 8 00:20:57.039063 kubelet[2501]: E1108 00:20:57.038736 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:57.043325 containerd[1463]: time="2025-11-08T00:20:57.043141484Z" level=info msg="CreateContainer within sandbox \"27e21a160756b8e8b54b533788d6b29de2fe8a4f31ea22222ab5a1d0b85fdbef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:20:57.056993 containerd[1463]: time="2025-11-08T00:20:57.056326131Z" level=info msg="CreateContainer within sandbox \"27e21a160756b8e8b54b533788d6b29de2fe8a4f31ea22222ab5a1d0b85fdbef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"52a4ebd5bb82a81269cc6675873387834e1df4aca76598a16595c17e57a9e512\"" Nov 8 00:20:57.058778 containerd[1463]: time="2025-11-08T00:20:57.057963317Z" level=info msg="StartContainer for \"52a4ebd5bb82a81269cc6675873387834e1df4aca76598a16595c17e57a9e512\"" Nov 8 00:20:57.093147 containerd[1463]: time="2025-11-08T00:20:57.093095109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6wdds,Uid:3b19c9c2-caa6-4868-832e-8239d6493b6f,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:20:57.096345 systemd[1]: Started cri-containerd-52a4ebd5bb82a81269cc6675873387834e1df4aca76598a16595c17e57a9e512.scope - libcontainer container 52a4ebd5bb82a81269cc6675873387834e1df4aca76598a16595c17e57a9e512. Nov 8 00:20:57.151269 containerd[1463]: time="2025-11-08T00:20:57.150327894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:20:57.151269 containerd[1463]: time="2025-11-08T00:20:57.150423078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:20:57.151269 containerd[1463]: time="2025-11-08T00:20:57.150447108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:57.151269 containerd[1463]: time="2025-11-08T00:20:57.150578900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:20:57.160480 containerd[1463]: time="2025-11-08T00:20:57.160421533Z" level=info msg="StartContainer for \"52a4ebd5bb82a81269cc6675873387834e1df4aca76598a16595c17e57a9e512\" returns successfully" Nov 8 00:20:57.183101 systemd[1]: Started cri-containerd-8cf33a927740b5f8544327c4b2d7950d16f5bb6068c02ace3b8fa73f47a8a2dc.scope - libcontainer container 8cf33a927740b5f8544327c4b2d7950d16f5bb6068c02ace3b8fa73f47a8a2dc. 
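With kube-proxy's container started, kubelet turns to the tigera-operator pod: sandbox 8cf33a92… comes up and a PullImage request for quay.io/tigera/operator:v1.38.7 follows (the pull completes a few seconds later, below). An equivalent pull through the containerd 1.x Go client looks roughly like this sketch; the socket path and the k8s.io namespace, where all CRI-created images and containers live, are containerd defaults:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// kubelet's CRI traffic lands in containerd's "k8s.io" namespace,
	// which is why the sandboxes and images above live there.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Rough equivalent of the PullImage request logged above;
	// WithPullUnpack also unpacks the layers into a snapshot so a
	// container can be started from the image immediately.
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```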
Nov 8 00:20:57.236599 containerd[1463]: time="2025-11-08T00:20:57.236548069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6wdds,Uid:3b19c9c2-caa6-4868-832e-8239d6493b6f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8cf33a927740b5f8544327c4b2d7950d16f5bb6068c02ace3b8fa73f47a8a2dc\"" Nov 8 00:20:57.239108 containerd[1463]: time="2025-11-08T00:20:57.239072217Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:20:57.432646 kubelet[2501]: E1108 00:20:57.432603 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:57.729970 systemd[1]: run-containerd-runc-k8s.io-27e21a160756b8e8b54b533788d6b29de2fe8a4f31ea22222ab5a1d0b85fdbef-runc.Z0PeO4.mount: Deactivated successfully. Nov 8 00:20:58.965569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673421805.mount: Deactivated successfully. Nov 8 00:20:59.428643 kubelet[2501]: E1108 00:20:59.428600 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:59.441814 kubelet[2501]: E1108 00:20:59.441771 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:20:59.472946 kubelet[2501]: I1108 00:20:59.472892 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-snwk9" podStartSLOduration=3.472852308 podStartE2EDuration="3.472852308s" podCreationTimestamp="2025-11-08 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:20:57.446085501 +0000 UTC m=+8.256648605" watchObservedRunningTime="2025-11-08 00:20:59.472852308 +0000 UTC m=+10.283415404" Nov 8 00:21:00.955027 containerd[1463]: time="2025-11-08T00:21:00.954219736Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:00.956537 containerd[1463]: time="2025-11-08T00:21:00.956182708Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:21:00.957181 containerd[1463]: time="2025-11-08T00:21:00.957135296Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:00.959943 containerd[1463]: time="2025-11-08T00:21:00.959902861Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:00.961134 containerd[1463]: time="2025-11-08T00:21:00.961094811Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.721981811s" Nov 8 00:21:00.961134 containerd[1463]: time="2025-11-08T00:21:00.961135405Z" level=info msg="PullImage 
\"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:21:00.966414 containerd[1463]: time="2025-11-08T00:21:00.966362674Z" level=info msg="CreateContainer within sandbox \"8cf33a927740b5f8544327c4b2d7950d16f5bb6068c02ace3b8fa73f47a8a2dc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:21:00.983154 containerd[1463]: time="2025-11-08T00:21:00.982981794Z" level=info msg="CreateContainer within sandbox \"8cf33a927740b5f8544327c4b2d7950d16f5bb6068c02ace3b8fa73f47a8a2dc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"15cd13748cbc6c41ee02df78b882c9c2a14847b6d11afa3b6f030b42299d7790\"" Nov 8 00:21:00.984404 containerd[1463]: time="2025-11-08T00:21:00.984194465Z" level=info msg="StartContainer for \"15cd13748cbc6c41ee02df78b882c9c2a14847b6d11afa3b6f030b42299d7790\"" Nov 8 00:21:01.031258 systemd[1]: Started cri-containerd-15cd13748cbc6c41ee02df78b882c9c2a14847b6d11afa3b6f030b42299d7790.scope - libcontainer container 15cd13748cbc6c41ee02df78b882c9c2a14847b6d11afa3b6f030b42299d7790. Nov 8 00:21:01.070340 containerd[1463]: time="2025-11-08T00:21:01.070285166Z" level=info msg="StartContainer for \"15cd13748cbc6c41ee02df78b882c9c2a14847b6d11afa3b6f030b42299d7790\" returns successfully" Nov 8 00:21:02.124417 kubelet[2501]: E1108 00:21:02.124015 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:02.141656 kubelet[2501]: I1108 00:21:02.141424 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6wdds" podStartSLOduration=2.416455753 podStartE2EDuration="6.141395437s" podCreationTimestamp="2025-11-08 00:20:56 +0000 UTC" firstStartedPulling="2025-11-08 00:20:57.238221282 +0000 UTC m=+8.048784378" lastFinishedPulling="2025-11-08 00:21:00.963160978 +0000 UTC m=+11.773724062" observedRunningTime="2025-11-08 00:21:01.472036311 +0000 UTC m=+12.282599418" watchObservedRunningTime="2025-11-08 00:21:02.141395437 +0000 UTC m=+12.951958549" Nov 8 00:21:03.238132 update_engine[1443]: I20251108 00:21:03.237995 1443 update_attempter.cc:509] Updating boot flags... Nov 8 00:21:03.298429 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2852) Nov 8 00:21:03.415628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2852) Nov 8 00:21:06.584729 sudo[1644]: pam_unix(sudo:session): session closed for user root Nov 8 00:21:06.593130 sshd[1641]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:06.600322 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:21:06.601990 systemd[1]: sshd@6-64.23.225.39:22-139.178.68.195:33936.service: Deactivated successfully. Nov 8 00:21:06.606516 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:21:06.606982 systemd[1]: session-7.scope: Consumed 6.132s CPU time, 145.7M memory peak, 0B memory swap peak. Nov 8 00:21:06.610173 systemd-logind[1440]: Removed session 7. Nov 8 00:21:13.644565 systemd[1]: Created slice kubepods-besteffort-pod8410e519_3413_45a3_b95d_3890ea8bc9cc.slice - libcontainer container kubepods-besteffort-pod8410e519_3413_45a3_b95d_3890ea8bc9cc.slice. 
Nov 8 00:21:13.731607 kubelet[2501]: I1108 00:21:13.731533 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnb8d\" (UniqueName: \"kubernetes.io/projected/8410e519-3413-45a3-b95d-3890ea8bc9cc-kube-api-access-xnb8d\") pod \"calico-typha-549fd7cd4d-l5wqp\" (UID: \"8410e519-3413-45a3-b95d-3890ea8bc9cc\") " pod="calico-system/calico-typha-549fd7cd4d-l5wqp" Nov 8 00:21:13.731607 kubelet[2501]: I1108 00:21:13.731608 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8410e519-3413-45a3-b95d-3890ea8bc9cc-typha-certs\") pod \"calico-typha-549fd7cd4d-l5wqp\" (UID: \"8410e519-3413-45a3-b95d-3890ea8bc9cc\") " pod="calico-system/calico-typha-549fd7cd4d-l5wqp" Nov 8 00:21:13.732238 kubelet[2501]: I1108 00:21:13.731662 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8410e519-3413-45a3-b95d-3890ea8bc9cc-tigera-ca-bundle\") pod \"calico-typha-549fd7cd4d-l5wqp\" (UID: \"8410e519-3413-45a3-b95d-3890ea8bc9cc\") " pod="calico-system/calico-typha-549fd7cd4d-l5wqp" Nov 8 00:21:13.880941 systemd[1]: Created slice kubepods-besteffort-poda6e27ad7_ea0d_469b_8baf_6f2be276a975.slice - libcontainer container kubepods-besteffort-poda6e27ad7_ea0d_469b_8baf_6f2be276a975.slice. Nov 8 00:21:13.933144 kubelet[2501]: I1108 00:21:13.932708 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6e27ad7-ea0d-469b-8baf-6f2be276a975-lib-modules\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933144 kubelet[2501]: I1108 00:21:13.932756 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a6e27ad7-ea0d-469b-8baf-6f2be276a975-cni-net-dir\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933144 kubelet[2501]: I1108 00:21:13.932772 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a6e27ad7-ea0d-469b-8baf-6f2be276a975-var-run-calico\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933144 kubelet[2501]: I1108 00:21:13.932788 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a6e27ad7-ea0d-469b-8baf-6f2be276a975-cni-bin-dir\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933144 kubelet[2501]: I1108 00:21:13.932803 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6e27ad7-ea0d-469b-8baf-6f2be276a975-tigera-ca-bundle\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933411 kubelet[2501]: I1108 00:21:13.932822 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/a6e27ad7-ea0d-469b-8baf-6f2be276a975-flexvol-driver-host\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933411 kubelet[2501]: I1108 00:21:13.932841 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a6e27ad7-ea0d-469b-8baf-6f2be276a975-node-certs\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933411 kubelet[2501]: I1108 00:21:13.932884 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a6e27ad7-ea0d-469b-8baf-6f2be276a975-policysync\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933411 kubelet[2501]: I1108 00:21:13.932901 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a6e27ad7-ea0d-469b-8baf-6f2be276a975-var-lib-calico\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933411 kubelet[2501]: I1108 00:21:13.932917 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6e27ad7-ea0d-469b-8baf-6f2be276a975-xtables-lock\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933578 kubelet[2501]: I1108 00:21:13.932943 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t49mr\" (UniqueName: \"kubernetes.io/projected/a6e27ad7-ea0d-469b-8baf-6f2be276a975-kube-api-access-t49mr\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.933578 kubelet[2501]: I1108 00:21:13.932962 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a6e27ad7-ea0d-469b-8baf-6f2be276a975-cni-log-dir\") pod \"calico-node-f8dzs\" (UID: \"a6e27ad7-ea0d-469b-8baf-6f2be276a975\") " pod="calico-system/calico-node-f8dzs" Nov 8 00:21:13.952701 kubelet[2501]: E1108 00:21:13.952660 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:13.954036 containerd[1463]: time="2025-11-08T00:21:13.953581358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549fd7cd4d-l5wqp,Uid:8410e519-3413-45a3-b95d-3890ea8bc9cc,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:13.998677 containerd[1463]: time="2025-11-08T00:21:13.998215266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:13.998677 containerd[1463]: time="2025-11-08T00:21:13.998287780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:13.998677 containerd[1463]: time="2025-11-08T00:21:13.998325511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:13.998677 containerd[1463]: time="2025-11-08T00:21:13.998463481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:14.048469 kubelet[2501]: E1108 00:21:14.048423 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.048698 kubelet[2501]: W1108 00:21:14.048654 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.048765 kubelet[2501]: E1108 00:21:14.048694 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.052152 kubelet[2501]: E1108 00:21:14.052060 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.052152 kubelet[2501]: W1108 00:21:14.052082 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.052152 kubelet[2501]: E1108 00:21:14.052104 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.060152 systemd[1]: Started cri-containerd-c6f8bc96cc4a57b060eff5c2809a02da02d979b6c3fa5b52cca912aa949c725b.scope - libcontainer container c6f8bc96cc4a57b060eff5c2809a02da02d979b6c3fa5b52cca912aa949c725b. Nov 8 00:21:14.062617 kubelet[2501]: E1108 00:21:14.062593 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.062749 kubelet[2501]: W1108 00:21:14.062735 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.062812 kubelet[2501]: E1108 00:21:14.062802 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:14.114543 kubelet[2501]: E1108 00:21:14.114371 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:14.135638 kubelet[2501]: E1108 00:21:14.135206 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.135638 kubelet[2501]: W1108 00:21:14.135237 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.135638 kubelet[2501]: E1108 00:21:14.135268 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.135638 kubelet[2501]: E1108 00:21:14.135579 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.135638 kubelet[2501]: W1108 00:21:14.135591 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.135638 kubelet[2501]: E1108 00:21:14.135605 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.136174 kubelet[2501]: E1108 00:21:14.135852 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.136174 kubelet[2501]: W1108 00:21:14.135864 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.136174 kubelet[2501]: E1108 00:21:14.135892 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.136761 kubelet[2501]: E1108 00:21:14.136732 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.136761 kubelet[2501]: W1108 00:21:14.136749 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.136761 kubelet[2501]: E1108 00:21:14.136763 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:14.137218 kubelet[2501]: E1108 00:21:14.137201 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.137218 kubelet[2501]: W1108 00:21:14.137214 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.137605 kubelet[2501]: E1108 00:21:14.137225 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.137804 kubelet[2501]: E1108 00:21:14.137773 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.137804 kubelet[2501]: W1108 00:21:14.137793 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.138162 kubelet[2501]: E1108 00:21:14.137805 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.138283 kubelet[2501]: E1108 00:21:14.138261 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.138283 kubelet[2501]: W1108 00:21:14.138272 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.138283 kubelet[2501]: E1108 00:21:14.138283 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.139114 kubelet[2501]: E1108 00:21:14.139089 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.139114 kubelet[2501]: W1108 00:21:14.139102 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.139114 kubelet[2501]: E1108 00:21:14.139114 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.139828 kubelet[2501]: E1108 00:21:14.139811 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.139828 kubelet[2501]: W1108 00:21:14.139824 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.139989 kubelet[2501]: E1108 00:21:14.139837 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:14.141338 kubelet[2501]: E1108 00:21:14.141289 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.141687 kubelet[2501]: W1108 00:21:14.141456 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.141687 kubelet[2501]: E1108 00:21:14.141505 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.142446 kubelet[2501]: E1108 00:21:14.142158 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.142446 kubelet[2501]: W1108 00:21:14.142170 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.142446 kubelet[2501]: E1108 00:21:14.142184 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.143363 kubelet[2501]: E1108 00:21:14.142760 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.143363 kubelet[2501]: W1108 00:21:14.142775 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.143363 kubelet[2501]: E1108 00:21:14.142790 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.143534 kubelet[2501]: E1108 00:21:14.143392 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.143534 kubelet[2501]: W1108 00:21:14.143407 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.143534 kubelet[2501]: E1108 00:21:14.143421 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:14.143534 kubelet[2501]: I1108 00:21:14.143448 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71de4983-7c24-4272-8fa7-0a4b5407d2c0-registration-dir\") pod \"csi-node-driver-ldqsl\" (UID: \"71de4983-7c24-4272-8fa7-0a4b5407d2c0\") " pod="calico-system/csi-node-driver-ldqsl" Nov 8 00:21:14.143713 kubelet[2501]: E1108 00:21:14.143691 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.143713 kubelet[2501]: W1108 00:21:14.143707 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.143799 kubelet[2501]: E1108 00:21:14.143719 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.143799 kubelet[2501]: I1108 00:21:14.143745 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71de4983-7c24-4272-8fa7-0a4b5407d2c0-kubelet-dir\") pod \"csi-node-driver-ldqsl\" (UID: \"71de4983-7c24-4272-8fa7-0a4b5407d2c0\") " pod="calico-system/csi-node-driver-ldqsl" Nov 8 00:21:14.144048 kubelet[2501]: E1108 00:21:14.144023 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.144110 kubelet[2501]: W1108 00:21:14.144048 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.144110 kubelet[2501]: E1108 00:21:14.144061 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.144110 kubelet[2501]: I1108 00:21:14.144095 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71de4983-7c24-4272-8fa7-0a4b5407d2c0-socket-dir\") pod \"csi-node-driver-ldqsl\" (UID: \"71de4983-7c24-4272-8fa7-0a4b5407d2c0\") " pod="calico-system/csi-node-driver-ldqsl" Nov 8 00:21:14.144385 kubelet[2501]: E1108 00:21:14.144369 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.144385 kubelet[2501]: W1108 00:21:14.144380 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.144445 kubelet[2501]: E1108 00:21:14.144392 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:14.144703 kubelet[2501]: E1108 00:21:14.144655 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.144703 kubelet[2501]: W1108 00:21:14.144667 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.144703 kubelet[2501]: E1108 00:21:14.144681 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.145000 kubelet[2501]: E1108 00:21:14.144986 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.145000 kubelet[2501]: W1108 00:21:14.144998 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.145186 kubelet[2501]: E1108 00:21:14.145012 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.145412 kubelet[2501]: E1108 00:21:14.145239 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.145412 kubelet[2501]: W1108 00:21:14.145248 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.145412 kubelet[2501]: E1108 00:21:14.145259 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.145888 kubelet[2501]: E1108 00:21:14.145545 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.145888 kubelet[2501]: W1108 00:21:14.145560 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.145888 kubelet[2501]: E1108 00:21:14.145585 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.145888 kubelet[2501]: E1108 00:21:14.145857 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.145888 kubelet[2501]: W1108 00:21:14.145886 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.146518 kubelet[2501]: E1108 00:21:14.145898 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:14.146518 kubelet[2501]: E1108 00:21:14.146128 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.146518 kubelet[2501]: W1108 00:21:14.146138 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.146518 kubelet[2501]: E1108 00:21:14.146148 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.146518 kubelet[2501]: E1108 00:21:14.146355 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.146518 kubelet[2501]: W1108 00:21:14.146364 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.146518 kubelet[2501]: E1108 00:21:14.146374 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.147414 kubelet[2501]: E1108 00:21:14.146587 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.147414 kubelet[2501]: W1108 00:21:14.146596 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.147414 kubelet[2501]: E1108 00:21:14.146607 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.147414 kubelet[2501]: E1108 00:21:14.146845 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.147414 kubelet[2501]: W1108 00:21:14.146858 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.147414 kubelet[2501]: E1108 00:21:14.146940 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.147414 kubelet[2501]: E1108 00:21:14.147220 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.147414 kubelet[2501]: W1108 00:21:14.147231 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.147414 kubelet[2501]: E1108 00:21:14.147242 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:14.147414 kubelet[2501]: E1108 00:21:14.147402 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.148239 kubelet[2501]: W1108 00:21:14.147410 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.148239 kubelet[2501]: E1108 00:21:14.147420 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.148239 kubelet[2501]: E1108 00:21:14.147559 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.148239 kubelet[2501]: W1108 00:21:14.147566 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.148239 kubelet[2501]: E1108 00:21:14.147573 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.148239 kubelet[2501]: E1108 00:21:14.147758 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.148239 kubelet[2501]: W1108 00:21:14.147767 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.148239 kubelet[2501]: E1108 00:21:14.147776 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.186182 kubelet[2501]: E1108 00:21:14.186068 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:14.189888 containerd[1463]: time="2025-11-08T00:21:14.187568509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f8dzs,Uid:a6e27ad7-ea0d-469b-8baf-6f2be276a975,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:14.226432 containerd[1463]: time="2025-11-08T00:21:14.226264654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:14.226432 containerd[1463]: time="2025-11-08T00:21:14.226322698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:14.226432 containerd[1463]: time="2025-11-08T00:21:14.226339174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:14.228899 containerd[1463]: time="2025-11-08T00:21:14.228190104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:14.250199 kubelet[2501]: E1108 00:21:14.250057 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.250199 kubelet[2501]: W1108 00:21:14.250092 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.250199 kubelet[2501]: E1108 00:21:14.250137 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.254994 kubelet[2501]: E1108 00:21:14.252252 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.254994 kubelet[2501]: W1108 00:21:14.252281 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.254994 kubelet[2501]: E1108 00:21:14.252307 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.254994 kubelet[2501]: E1108 00:21:14.253413 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.254994 kubelet[2501]: W1108 00:21:14.253437 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.254994 kubelet[2501]: E1108 00:21:14.253468 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.254994 kubelet[2501]: E1108 00:21:14.255115 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.254994 kubelet[2501]: W1108 00:21:14.255139 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.254994 kubelet[2501]: E1108 00:21:14.255165 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.257674 kubelet[2501]: E1108 00:21:14.256548 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.257674 kubelet[2501]: W1108 00:21:14.256585 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.257674 kubelet[2501]: E1108 00:21:14.256613 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:21:14.257674 kubelet[2501]: I1108 00:21:14.256755 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71de4983-7c24-4272-8fa7-0a4b5407d2c0-varrun\") pod \"csi-node-driver-ldqsl\" (UID: \"71de4983-7c24-4272-8fa7-0a4b5407d2c0\") " pod="calico-system/csi-node-driver-ldqsl" Nov 8 00:21:14.259535 kubelet[2501]: E1108 00:21:14.258820 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.259535 kubelet[2501]: W1108 00:21:14.258918 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.259535 kubelet[2501]: E1108 00:21:14.258947 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.262046 kubelet[2501]: E1108 00:21:14.260688 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.262046 kubelet[2501]: W1108 00:21:14.260714 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.262046 kubelet[2501]: E1108 00:21:14.260744 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.263794 kubelet[2501]: E1108 00:21:14.263093 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.263794 kubelet[2501]: W1108 00:21:14.263127 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.263794 kubelet[2501]: E1108 00:21:14.263158 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:21:14.263794 kubelet[2501]: I1108 00:21:14.263244 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpwkr\" (UniqueName: \"kubernetes.io/projected/71de4983-7c24-4272-8fa7-0a4b5407d2c0-kube-api-access-qpwkr\") pod \"csi-node-driver-ldqsl\" (UID: \"71de4983-7c24-4272-8fa7-0a4b5407d2c0\") " pod="calico-system/csi-node-driver-ldqsl" Nov 8 00:21:14.265532 kubelet[2501]: E1108 00:21:14.264756 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:14.265532 kubelet[2501]: W1108 00:21:14.264782 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:14.265532 kubelet[2501]: E1108 00:21:14.264808 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Nov 8 00:21:14.279472 containerd[1463]: time="2025-11-08T00:21:14.277625172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549fd7cd4d-l5wqp,Uid:8410e519-3413-45a3-b95d-3890ea8bc9cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6f8bc96cc4a57b060eff5c2809a02da02d979b6c3fa5b52cca912aa949c725b\"" Nov 8 00:21:14.281595 kubelet[2501]: E1108 00:21:14.281505 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:14.284024 containerd[1463]: time="2025-11-08T00:21:14.283922862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:21:14.307856 systemd[1]: Started cri-containerd-1e25e415a3c670d93a8f524c3c2857f860de6898cd304891cd90a0d865c6a684.scope - libcontainer container 1e25e415a3c670d93a8f524c3c2857f860de6898cd304891cd90a0d865c6a684. Nov 8 00:21:14.365673 containerd[1463]: time="2025-11-08T00:21:14.365626415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f8dzs,Uid:a6e27ad7-ea0d-469b-8baf-6f2be276a975,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e25e415a3c670d93a8f524c3c2857f860de6898cd304891cd90a0d865c6a684\"" Nov 8 00:21:14.366899 kubelet[2501]: E1108 00:21:14.366844 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
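The recurring "Nameserver limits exceeded" events are benign: the node's resolv.conf lists more resolvers than the classic glibc limit of three, so the kubelet applies only the first three entries, duplicates included, which is why 67.207.67.3 appears twice in the applied line. A small sketch of that clamp, assuming a hypothetical fourth resolver in resolv.conf:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the classic glibc MAXNS limit the kubelet warns about.
const maxNameservers = 3

func clampNameservers(ns []string) []string {
	if len(ns) > maxNameservers {
		return ns[:maxNameservers]
	}
	return ns
}

func main() {
	// First three entries as logged; the fourth is a hypothetical extra resolver.
	fromResolvConf := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "8.8.8.8"}
	fmt.Println("applied nameserver line is:", strings.Join(clampNameservers(fromResolvConf), " "))
}
```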
Nov 8 00:21:15.374719 kubelet[2501]: E1108 00:21:15.374078 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:15.792690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4255046091.mount: Deactivated successfully. Nov 8 00:21:16.810767 containerd[1463]: time="2025-11-08T00:21:16.810697977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:16.812957 containerd[1463]: time="2025-11-08T00:21:16.812378769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:21:16.812957 containerd[1463]: time="2025-11-08T00:21:16.812444410Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:16.816539 containerd[1463]: time="2025-11-08T00:21:16.816481864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:16.819471 containerd[1463]: time="2025-11-08T00:21:16.819428484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.535102188s" Nov 8 00:21:16.819757 containerd[1463]: time="2025-11-08T00:21:16.819717718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:21:16.821362 containerd[1463]: time="2025-11-08T00:21:16.821323069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:21:16.843727 containerd[1463]: time="2025-11-08T00:21:16.843674485Z" level=info msg="CreateContainer within sandbox \"c6f8bc96cc4a57b060eff5c2809a02da02d979b6c3fa5b52cca912aa949c725b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:21:16.879862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504768887.mount: Deactivated successfully. Nov 8 00:21:16.913922 containerd[1463]: time="2025-11-08T00:21:16.913167959Z" level=info msg="CreateContainer within sandbox \"c6f8bc96cc4a57b060eff5c2809a02da02d979b6c3fa5b52cca912aa949c725b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d51bb502fd7bab9b6d13f890a9c3cec0f6929a8902e0d27928db542cad7c01e2\"" Nov 8 00:21:16.916710 containerd[1463]: time="2025-11-08T00:21:16.914982258Z" level=info msg="StartContainer for \"d51bb502fd7bab9b6d13f890a9c3cec0f6929a8902e0d27928db542cad7c01e2\"" Nov 8 00:21:16.984241 systemd[1]: Started cri-containerd-d51bb502fd7bab9b6d13f890a9c3cec0f6929a8902e0d27928db542cad7c01e2.scope - libcontainer container d51bb502fd7bab9b6d13f890a9c3cec0f6929a8902e0d27928db542cad7c01e2.
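As a back-of-the-envelope check, the typha pull stats above (bytes read=35234628 over 2.535102188s) work out to roughly 13 MiB/s of effective registry throughput:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the containerd events above.
	const bytesRead = 35234628 // "stop pulling image ...: bytes read=35234628"
	d, err := time.ParseDuration("2.535102188s")
	if err != nil {
		panic(err)
	}
	fmt.Printf("effective pull rate: %.1f MiB/s\n", float64(bytesRead)/d.Seconds()/(1<<20))
}
```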
Nov 8 00:21:17.051965 containerd[1463]: time="2025-11-08T00:21:17.051908381Z" level=info msg="StartContainer for \"d51bb502fd7bab9b6d13f890a9c3cec0f6929a8902e0d27928db542cad7c01e2\" returns successfully" Nov 8 00:21:17.371977 kubelet[2501]: E1108 00:21:17.371923 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:17.504310 kubelet[2501]: E1108 00:21:17.504269 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:17.552386 kubelet[2501]: I1108 00:21:17.552312 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-549fd7cd4d-l5wqp" podStartSLOduration=2.013888052 podStartE2EDuration="4.552288349s" podCreationTimestamp="2025-11-08 00:21:13 +0000 UTC" firstStartedPulling="2025-11-08 00:21:14.282568375 +0000 UTC m=+25.093131471" lastFinishedPulling="2025-11-08 00:21:16.820968671 +0000 UTC m=+27.631531768" observedRunningTime="2025-11-08 00:21:17.538104643 +0000 UTC m=+28.348667751" watchObservedRunningTime="2025-11-08 00:21:17.552288349 +0000 UTC m=+28.362851454" Nov 8 00:21:17.572791 kubelet[2501]: E1108 00:21:17.572738 2501 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:21:17.572791 kubelet[2501]: W1108 00:21:17.572775 2501 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:21:17.573059 kubelet[2501]: E1108 00:21:17.572806 2501 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
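The pod_startup_latency_tracker numbers above are internally consistent: podStartE2EDuration is the watch-observed running time minus the creation timestamp, and podStartSLOduration subtracts the image-pull window from that. A worked check in Go (a sketch of the arithmetic, not kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

// layout matches the timestamp format in the log entry above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-08 00:21:13 +0000 UTC")
	firstPull := mustParse("2025-11-08 00:21:14.282568375 +0000 UTC")
	lastPull := mustParse("2025-11-08 00:21:16.820968671 +0000 UTC")
	running := mustParse("2025-11-08 00:21:17.552288349 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 4.552288349s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~2.013888053s
	fmt.Println("E2E:", e2e, "SLO:", slo)
}
```

That is 4.552288349s minus the 2.538400296s pull window, about 2.013888053s, matching the logged podStartSLOduration up to a nanosecond of rounding.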
Nov 8 00:21:18.129014 containerd[1463]: time="2025-11-08T00:21:18.128969753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:18.130848 containerd[1463]: time="2025-11-08T00:21:18.130464375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:21:18.148541 containerd[1463]: time="2025-11-08T00:21:18.148414109Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:18.149354 containerd[1463]: time="2025-11-08T00:21:18.149158373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.327619635s" Nov 8 00:21:18.149354 containerd[1463]: time="2025-11-08T00:21:18.149199562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:21:18.150633 containerd[1463]: time="2025-11-08T00:21:18.150542060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:18.156155 containerd[1463]: time="2025-11-08T00:21:18.156107393Z" level=info msg="CreateContainer within sandbox \"1e25e415a3c670d93a8f524c3c2857f860de6898cd304891cd90a0d865c6a684\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:21:18.173714 containerd[1463]: time="2025-11-08T00:21:18.173391767Z" level=info msg="CreateContainer within sandbox \"1e25e415a3c670d93a8f524c3c2857f860de6898cd304891cd90a0d865c6a684\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941\"" Nov 8 00:21:18.177787 containerd[1463]: time="2025-11-08T00:21:18.176493781Z" level=info msg="StartContainer for \"6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941\"" Nov 8 00:21:18.249140 systemd[1]: Started cri-containerd-6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941.scope - libcontainer container 6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941. Nov 8 00:21:18.301476 containerd[1463]: time="2025-11-08T00:21:18.301357917Z" level=info msg="StartContainer for \"6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941\" returns successfully" Nov 8 00:21:18.319490 systemd[1]: cri-containerd-6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941.scope: Deactivated successfully.
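The flexvol-driver container that just ran to completion is what populates the kubelet's FlexVolume plugin directory, which is why the "executable file not found" probe errors above do not recur after 00:21:18. A probe in that spirit (illustrative only) shows the state change:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// The driver path the kubelet was probing in the errors above.
	p := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	if _, err := os.Stat(p); err != nil {
		fmt.Println("driver still missing:", err)
		return
	}
	fmt.Println("driver installed:", p)
}
```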
Nov 8 00:21:18.410771 containerd[1463]: time="2025-11-08T00:21:18.384450002Z" level=info msg="shim disconnected" id=6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941 namespace=k8s.io Nov 8 00:21:18.410771 containerd[1463]: time="2025-11-08T00:21:18.409992188Z" level=warning msg="cleaning up after shim disconnected" id=6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941 namespace=k8s.io Nov 8 00:21:18.410771 containerd[1463]: time="2025-11-08T00:21:18.410009990Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:21:18.508504 kubelet[2501]: I1108 00:21:18.508471 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:21:18.510373 kubelet[2501]: E1108 00:21:18.508809 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:18.510373 kubelet[2501]: E1108 00:21:18.508891 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:18.511234 containerd[1463]: time="2025-11-08T00:21:18.510892612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:21:18.832678 systemd[1]: run-containerd-runc-k8s.io-6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941-runc.tITCEI.mount: Deactivated successfully. Nov 8 00:21:18.832818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d0c77852bf6e3e63d51fb1087ddb7958a1fd68ffa201e8b87743fb9e899e941-rootfs.mount: Deactivated successfully. Nov 8 00:21:19.374021 kubelet[2501]: E1108 00:21:19.373199 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:21.372159 kubelet[2501]: E1108 00:21:21.372106 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:21.697323 containerd[1463]: time="2025-11-08T00:21:21.697184164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:21.698221 containerd[1463]: time="2025-11-08T00:21:21.698052779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:21:21.698902 containerd[1463]: time="2025-11-08T00:21:21.698700167Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:21.700658 containerd[1463]: time="2025-11-08T00:21:21.700626100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:21.702121 containerd[1463]: time="2025-11-08T00:21:21.702078476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.191144583s" Nov 8 00:21:21.702288 containerd[1463]: time="2025-11-08T00:21:21.702261219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:21:21.708552 containerd[1463]: time="2025-11-08T00:21:21.708126379Z" level=info msg="CreateContainer within sandbox \"1e25e415a3c670d93a8f524c3c2857f860de6898cd304891cd90a0d865c6a684\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:21:21.746173 containerd[1463]: time="2025-11-08T00:21:21.746107924Z" level=info msg="CreateContainer within sandbox \"1e25e415a3c670d93a8f524c3c2857f860de6898cd304891cd90a0d865c6a684\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f6cee3060c2def7665989e1eb13e46016bba19917806947194d42fc9f5520e8e\"" Nov 8 00:21:21.750533 containerd[1463]: time="2025-11-08T00:21:21.747323701Z" level=info msg="StartContainer for \"f6cee3060c2def7665989e1eb13e46016bba19917806947194d42fc9f5520e8e\"" Nov 8 00:21:21.797130 systemd[1]: Started cri-containerd-f6cee3060c2def7665989e1eb13e46016bba19917806947194d42fc9f5520e8e.scope - libcontainer container f6cee3060c2def7665989e1eb13e46016bba19917806947194d42fc9f5520e8e. Nov 8 00:21:21.850241 containerd[1463]: time="2025-11-08T00:21:21.850169677Z" level=info msg="StartContainer for \"f6cee3060c2def7665989e1eb13e46016bba19917806947194d42fc9f5520e8e\" returns successfully" Nov 8 00:21:22.411663 systemd[1]: cri-containerd-f6cee3060c2def7665989e1eb13e46016bba19917806947194d42fc9f5520e8e.scope: Deactivated successfully. Nov 8 00:21:22.467154 containerd[1463]: time="2025-11-08T00:21:22.465189980Z" level=info msg="shim disconnected" id=f6cee3060c2def7665989e1eb13e46016bba19917806947194d42fc9f5520e8e namespace=k8s.io Nov 8 00:21:22.467154 containerd[1463]: time="2025-11-08T00:21:22.465630726Z" level=warning msg="cleaning up after shim disconnected" id=f6cee3060c2def7665989e1eb13e46016bba19917806947194d42fc9f5520e8e namespace=k8s.io Nov 8 00:21:22.467154 containerd[1463]: time="2025-11-08T00:21:22.466936218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:21:22.467806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6cee3060c2def7665989e1eb13e46016bba19917806947194d42fc9f5520e8e-rootfs.mount: Deactivated successfully. 
Nov 8 00:21:22.484792 containerd[1463]: time="2025-11-08T00:21:22.484744983Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:21:22Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:21:22.486210 kubelet[2501]: I1108 00:21:22.486169 2501 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:21:22.528965 kubelet[2501]: E1108 00:21:22.528926 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:22.531978 containerd[1463]: time="2025-11-08T00:21:22.531937368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:21:22.581018 systemd[1]: Created slice kubepods-burstable-podbf32dec8_2497_4f2e_91ee_003d9b7723b4.slice - libcontainer container kubepods-burstable-podbf32dec8_2497_4f2e_91ee_003d9b7723b4.slice. Nov 8 00:21:22.594231 systemd[1]: Created slice kubepods-besteffort-pod10d0265b_9d98_419e_98b8_ef3078177b60.slice - libcontainer container kubepods-besteffort-pod10d0265b_9d98_419e_98b8_ef3078177b60.slice. Nov 8 00:21:22.604593 systemd[1]: Created slice kubepods-burstable-pod25d1e64e_eeda_401a_ad8a_78903d2ff60f.slice - libcontainer container kubepods-burstable-pod25d1e64e_eeda_401a_ad8a_78903d2ff60f.slice. Nov 8 00:21:22.616143 systemd[1]: Created slice kubepods-besteffort-pod117b536c_81bf_4f2f_9f1f_8a64cd38e25c.slice - libcontainer container kubepods-besteffort-pod117b536c_81bf_4f2f_9f1f_8a64cd38e25c.slice. Nov 8 00:21:22.624285 systemd[1]: Created slice kubepods-besteffort-pod329e0556_11fe_424d_9621_4c503891f4c4.slice - libcontainer container kubepods-besteffort-pod329e0556_11fe_424d_9621_4c503891f4c4.slice. Nov 8 00:21:22.632673 systemd[1]: Created slice kubepods-besteffort-pod9b72e8b4_2554_45c8_82a8_87c096020fee.slice - libcontainer container kubepods-besteffort-pod9b72e8b4_2554_45c8_82a8_87c096020fee.slice. 
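The slice names systemd just created follow the kubelet's cgroup naming scheme: kubepods-<qos>-pod<uid>.slice, with the dashes of the pod UID turned into underscores because "-" is systemd's hierarchy separator. A short sketch reproducing the names from the log:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reproduces the naming pattern visible in the systemd messages:
// kubepods-<qos>-pod<uid>.slice, with UID dashes replaced by underscores.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Pod UIDs taken from the log above.
	fmt.Println(podSlice("burstable", "bf32dec8-2497-4f2e-91ee-003d9b7723b4"))
	fmt.Println(podSlice("besteffort", "9b72e8b4-2554-45c8-82a8-87c096020fee"))
}
```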
Nov 8 00:21:22.639559 kubelet[2501]: I1108 00:21:22.639524 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/747a1d48-6b9b-4ad4-aae6-7e918f295a7f-calico-apiserver-certs\") pod \"calico-apiserver-596fc4bd76-8l5hn\" (UID: \"747a1d48-6b9b-4ad4-aae6-7e918f295a7f\") " pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" Nov 8 00:21:22.639559 kubelet[2501]: I1108 00:21:22.639563 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/329e0556-11fe-424d-9621-4c503891f4c4-goldmane-ca-bundle\") pod \"goldmane-666569f655-fm88z\" (UID: \"329e0556-11fe-424d-9621-4c503891f4c4\") " pod="calico-system/goldmane-666569f655-fm88z" Nov 8 00:21:22.639815 kubelet[2501]: I1108 00:21:22.639582 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-whisker-backend-key-pair\") pod \"whisker-86784b5f66-2xvmd\" (UID: \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\") " pod="calico-system/whisker-86784b5f66-2xvmd" Nov 8 00:21:22.639815 kubelet[2501]: I1108 00:21:22.639600 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9kgv\" (UniqueName: \"kubernetes.io/projected/9b72e8b4-2554-45c8-82a8-87c096020fee-kube-api-access-m9kgv\") pod \"calico-apiserver-596fc4bd76-bh4h9\" (UID: \"9b72e8b4-2554-45c8-82a8-87c096020fee\") " pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" Nov 8 00:21:22.639815 kubelet[2501]: I1108 00:21:22.639618 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf32dec8-2497-4f2e-91ee-003d9b7723b4-config-volume\") pod \"coredns-674b8bbfcf-c86nr\" (UID: \"bf32dec8-2497-4f2e-91ee-003d9b7723b4\") " pod="kube-system/coredns-674b8bbfcf-c86nr" Nov 8 00:21:22.639815 kubelet[2501]: I1108 00:21:22.639634 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7zhw\" (UniqueName: \"kubernetes.io/projected/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-kube-api-access-t7zhw\") pod \"whisker-86784b5f66-2xvmd\" (UID: \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\") " pod="calico-system/whisker-86784b5f66-2xvmd" Nov 8 00:21:22.639815 kubelet[2501]: I1108 00:21:22.639652 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwdzv\" (UniqueName: \"kubernetes.io/projected/10d0265b-9d98-419e-98b8-ef3078177b60-kube-api-access-cwdzv\") pod \"calico-kube-controllers-bc684977b-dwfpx\" (UID: \"10d0265b-9d98-419e-98b8-ef3078177b60\") " pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" Nov 8 00:21:22.640296 kubelet[2501]: I1108 00:21:22.639672 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8ghk\" (UniqueName: \"kubernetes.io/projected/bf32dec8-2497-4f2e-91ee-003d9b7723b4-kube-api-access-r8ghk\") pod \"coredns-674b8bbfcf-c86nr\" (UID: \"bf32dec8-2497-4f2e-91ee-003d9b7723b4\") " pod="kube-system/coredns-674b8bbfcf-c86nr" Nov 8 00:21:22.640296 kubelet[2501]: I1108 00:21:22.639689 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/329e0556-11fe-424d-9621-4c503891f4c4-config\") pod \"goldmane-666569f655-fm88z\" (UID: \"329e0556-11fe-424d-9621-4c503891f4c4\") " pod="calico-system/goldmane-666569f655-fm88z" Nov 8 00:21:22.640296 kubelet[2501]: I1108 00:21:22.639707 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/329e0556-11fe-424d-9621-4c503891f4c4-goldmane-key-pair\") pod \"goldmane-666569f655-fm88z\" (UID: \"329e0556-11fe-424d-9621-4c503891f4c4\") " pod="calico-system/goldmane-666569f655-fm88z" Nov 8 00:21:22.640296 kubelet[2501]: I1108 00:21:22.639721 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-whisker-ca-bundle\") pod \"whisker-86784b5f66-2xvmd\" (UID: \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\") " pod="calico-system/whisker-86784b5f66-2xvmd" Nov 8 00:21:22.640296 kubelet[2501]: I1108 00:21:22.639767 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w62mc\" (UniqueName: \"kubernetes.io/projected/747a1d48-6b9b-4ad4-aae6-7e918f295a7f-kube-api-access-w62mc\") pod \"calico-apiserver-596fc4bd76-8l5hn\" (UID: \"747a1d48-6b9b-4ad4-aae6-7e918f295a7f\") " pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" Nov 8 00:21:22.640453 kubelet[2501]: I1108 00:21:22.639784 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvx9\" (UniqueName: \"kubernetes.io/projected/329e0556-11fe-424d-9621-4c503891f4c4-kube-api-access-dzvx9\") pod \"goldmane-666569f655-fm88z\" (UID: \"329e0556-11fe-424d-9621-4c503891f4c4\") " pod="calico-system/goldmane-666569f655-fm88z" Nov 8 00:21:22.640453 kubelet[2501]: I1108 00:21:22.639801 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10d0265b-9d98-419e-98b8-ef3078177b60-tigera-ca-bundle\") pod \"calico-kube-controllers-bc684977b-dwfpx\" (UID: \"10d0265b-9d98-419e-98b8-ef3078177b60\") " pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" Nov 8 00:21:22.640453 kubelet[2501]: I1108 00:21:22.639831 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25d1e64e-eeda-401a-ad8a-78903d2ff60f-config-volume\") pod \"coredns-674b8bbfcf-h5wgp\" (UID: \"25d1e64e-eeda-401a-ad8a-78903d2ff60f\") " pod="kube-system/coredns-674b8bbfcf-h5wgp" Nov 8 00:21:22.640453 kubelet[2501]: I1108 00:21:22.639847 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mzx6\" (UniqueName: \"kubernetes.io/projected/25d1e64e-eeda-401a-ad8a-78903d2ff60f-kube-api-access-4mzx6\") pod \"coredns-674b8bbfcf-h5wgp\" (UID: \"25d1e64e-eeda-401a-ad8a-78903d2ff60f\") " pod="kube-system/coredns-674b8bbfcf-h5wgp" Nov 8 00:21:22.640453 kubelet[2501]: I1108 00:21:22.639864 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b72e8b4-2554-45c8-82a8-87c096020fee-calico-apiserver-certs\") pod \"calico-apiserver-596fc4bd76-bh4h9\" (UID: \"9b72e8b4-2554-45c8-82a8-87c096020fee\") " pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" Nov 8 
00:21:22.642937 systemd[1]: Created slice kubepods-besteffort-pod747a1d48_6b9b_4ad4_aae6_7e918f295a7f.slice - libcontainer container kubepods-besteffort-pod747a1d48_6b9b_4ad4_aae6_7e918f295a7f.slice. Nov 8 00:21:22.889063 kubelet[2501]: E1108 00:21:22.888107 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:22.890613 containerd[1463]: time="2025-11-08T00:21:22.890569379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c86nr,Uid:bf32dec8-2497-4f2e-91ee-003d9b7723b4,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:22.904726 containerd[1463]: time="2025-11-08T00:21:22.903608488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc684977b-dwfpx,Uid:10d0265b-9d98-419e-98b8-ef3078177b60,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:22.913746 kubelet[2501]: E1108 00:21:22.913175 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:22.915921 containerd[1463]: time="2025-11-08T00:21:22.914947043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h5wgp,Uid:25d1e64e-eeda-401a-ad8a-78903d2ff60f,Namespace:kube-system,Attempt:0,}" Nov 8 00:21:22.924261 containerd[1463]: time="2025-11-08T00:21:22.923864649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86784b5f66-2xvmd,Uid:117b536c-81bf-4f2f-9f1f-8a64cd38e25c,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:22.956287 containerd[1463]: time="2025-11-08T00:21:22.956134263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596fc4bd76-8l5hn,Uid:747a1d48-6b9b-4ad4-aae6-7e918f295a7f,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:21:22.961640 containerd[1463]: time="2025-11-08T00:21:22.956601631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fm88z,Uid:329e0556-11fe-424d-9621-4c503891f4c4,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:22.961992 containerd[1463]: time="2025-11-08T00:21:22.956639502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596fc4bd76-bh4h9,Uid:9b72e8b4-2554-45c8-82a8-87c096020fee,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:21:23.298167 containerd[1463]: time="2025-11-08T00:21:23.297697034Z" level=error msg="Failed to destroy network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.311098 containerd[1463]: time="2025-11-08T00:21:23.311041905Z" level=error msg="Failed to destroy network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.314316 containerd[1463]: time="2025-11-08T00:21:23.314258514Z" level=error msg="encountered an error cleaning up failed sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.314533 containerd[1463]: time="2025-11-08T00:21:23.314509155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fm88z,Uid:329e0556-11fe-424d-9621-4c503891f4c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.314996 containerd[1463]: time="2025-11-08T00:21:23.314945859Z" level=error msg="Failed to destroy network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.319337 containerd[1463]: time="2025-11-08T00:21:23.318277051Z" level=error msg="encountered an error cleaning up failed sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.319500 containerd[1463]: time="2025-11-08T00:21:23.319349622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86784b5f66-2xvmd,Uid:117b536c-81bf-4f2f-9f1f-8a64cd38e25c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.328331 containerd[1463]: time="2025-11-08T00:21:23.327965542Z" level=error msg="encountered an error cleaning up failed sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.328487 kubelet[2501]: E1108 00:21:23.328178 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.328487 kubelet[2501]: E1108 00:21:23.328268 2501 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86784b5f66-2xvmd" Nov 8 00:21:23.328487 kubelet[2501]: E1108 00:21:23.328181 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.328487 kubelet[2501]: E1108 00:21:23.328367 2501 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fm88z" Nov 8 00:21:23.328659 kubelet[2501]: E1108 00:21:23.328414 2501 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-fm88z" Nov 8 00:21:23.328659 kubelet[2501]: E1108 00:21:23.328492 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-fm88z_calico-system(329e0556-11fe-424d-9621-4c503891f4c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-fm88z_calico-system(329e0556-11fe-424d-9621-4c503891f4c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fm88z" podUID="329e0556-11fe-424d-9621-4c503891f4c4" Nov 8 00:21:23.329736 kubelet[2501]: E1108 00:21:23.328296 2501 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-86784b5f66-2xvmd" Nov 8 00:21:23.329736 kubelet[2501]: E1108 00:21:23.328890 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-86784b5f66-2xvmd_calico-system(117b536c-81bf-4f2f-9f1f-8a64cd38e25c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-86784b5f66-2xvmd_calico-system(117b536c-81bf-4f2f-9f1f-8a64cd38e25c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86784b5f66-2xvmd" podUID="117b536c-81bf-4f2f-9f1f-8a64cd38e25c" Nov 8 00:21:23.329905 containerd[1463]: time="2025-11-08T00:21:23.328978950Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-bc684977b-dwfpx,Uid:10d0265b-9d98-419e-98b8-ef3078177b60,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.331047 kubelet[2501]: E1108 00:21:23.330573 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.331047 kubelet[2501]: E1108 00:21:23.330628 2501 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" Nov 8 00:21:23.331047 kubelet[2501]: E1108 00:21:23.330650 2501 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" Nov 8 00:21:23.331249 kubelet[2501]: E1108 00:21:23.330701 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bc684977b-dwfpx_calico-system(10d0265b-9d98-419e-98b8-ef3078177b60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bc684977b-dwfpx_calico-system(10d0265b-9d98-419e-98b8-ef3078177b60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" podUID="10d0265b-9d98-419e-98b8-ef3078177b60" Nov 8 00:21:23.341024 containerd[1463]: time="2025-11-08T00:21:23.340725280Z" level=error msg="Failed to destroy network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.341324 containerd[1463]: time="2025-11-08T00:21:23.341200557Z" level=error msg="encountered an error cleaning up failed sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Nov 8 00:21:23.341324 containerd[1463]: time="2025-11-08T00:21:23.341284339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596fc4bd76-bh4h9,Uid:9b72e8b4-2554-45c8-82a8-87c096020fee,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.341852 kubelet[2501]: E1108 00:21:23.341592 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.341852 kubelet[2501]: E1108 00:21:23.341673 2501 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" Nov 8 00:21:23.341852 kubelet[2501]: E1108 00:21:23.341722 2501 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" Nov 8 00:21:23.343018 kubelet[2501]: E1108 00:21:23.341801 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-596fc4bd76-bh4h9_calico-apiserver(9b72e8b4-2554-45c8-82a8-87c096020fee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-596fc4bd76-bh4h9_calico-apiserver(9b72e8b4-2554-45c8-82a8-87c096020fee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" podUID="9b72e8b4-2554-45c8-82a8-87c096020fee" Nov 8 00:21:23.347860 containerd[1463]: time="2025-11-08T00:21:23.347760098Z" level=error msg="Failed to destroy network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.354562 containerd[1463]: time="2025-11-08T00:21:23.353123112Z" level=error msg="encountered an error cleaning up failed sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.354562 containerd[1463]: time="2025-11-08T00:21:23.353510377Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h5wgp,Uid:25d1e64e-eeda-401a-ad8a-78903d2ff60f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.356097 kubelet[2501]: E1108 00:21:23.353922 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.356097 kubelet[2501]: E1108 00:21:23.353980 2501 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h5wgp" Nov 8 00:21:23.356097 kubelet[2501]: E1108 00:21:23.354001 2501 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h5wgp" Nov 8 00:21:23.356293 kubelet[2501]: E1108 00:21:23.354056 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h5wgp_kube-system(25d1e64e-eeda-401a-ad8a-78903d2ff60f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h5wgp_kube-system(25d1e64e-eeda-401a-ad8a-78903d2ff60f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h5wgp" podUID="25d1e64e-eeda-401a-ad8a-78903d2ff60f" Nov 8 00:21:23.358606 containerd[1463]: time="2025-11-08T00:21:23.358160938Z" level=error msg="Failed to destroy network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.359341 containerd[1463]: time="2025-11-08T00:21:23.359180612Z" level=error msg="encountered an error cleaning up failed sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.359341 containerd[1463]: time="2025-11-08T00:21:23.359260854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c86nr,Uid:bf32dec8-2497-4f2e-91ee-003d9b7723b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.361398 kubelet[2501]: E1108 00:21:23.360945 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.361398 kubelet[2501]: E1108 00:21:23.361020 2501 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c86nr" Nov 8 00:21:23.361398 kubelet[2501]: E1108 00:21:23.361040 2501 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c86nr" Nov 8 00:21:23.361754 kubelet[2501]: E1108 00:21:23.361371 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-c86nr_kube-system(bf32dec8-2497-4f2e-91ee-003d9b7723b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-c86nr_kube-system(bf32dec8-2497-4f2e-91ee-003d9b7723b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-c86nr" podUID="bf32dec8-2497-4f2e-91ee-003d9b7723b4" Nov 8 00:21:23.367077 containerd[1463]: time="2025-11-08T00:21:23.366968400Z" level=error msg="Failed to destroy network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.367408 containerd[1463]: time="2025-11-08T00:21:23.367370539Z" level=error msg="encountered an error cleaning up failed sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.367450 containerd[1463]: time="2025-11-08T00:21:23.367432118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596fc4bd76-8l5hn,Uid:747a1d48-6b9b-4ad4-aae6-7e918f295a7f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.368985 kubelet[2501]: E1108 00:21:23.367705 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.368985 kubelet[2501]: E1108 00:21:23.367786 2501 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" Nov 8 00:21:23.368985 kubelet[2501]: E1108 00:21:23.367810 2501 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" Nov 8 00:21:23.369247 kubelet[2501]: E1108 00:21:23.367892 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-596fc4bd76-8l5hn_calico-apiserver(747a1d48-6b9b-4ad4-aae6-7e918f295a7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-596fc4bd76-8l5hn_calico-apiserver(747a1d48-6b9b-4ad4-aae6-7e918f295a7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" podUID="747a1d48-6b9b-4ad4-aae6-7e918f295a7f" Nov 8 00:21:23.379411 systemd[1]: Created slice kubepods-besteffort-pod71de4983_7c24_4272_8fa7_0a4b5407d2c0.slice - libcontainer container kubepods-besteffort-pod71de4983_7c24_4272_8fa7_0a4b5407d2c0.slice. 
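Every RunPodSandbox and StopPodSandbox failure above reduces to the same missing file: /var/lib/calico/nodename, which calico-node writes when it starts and which the Calico CNI plugin checks before wiring or tearing down any pod. Until calico-node is running (its image is still being pulled at this point in the log), every sandbox operation on the node fails identically. A minimal sketch of that gating check, in plain Go rather than Calico's actual source, with the path taken from the error text:

```go
// Illustrative only: reproduces the readiness gate behind the
// "stat /var/lib/calico/nodename: no such file or directory" errors.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

func main() {
	data, err := os.ReadFile(nodenameFile)
	switch {
	case os.IsNotExist(err):
		// The exact condition behind every sandbox add/delete failure above:
		// calico-node has not yet written its node registration file.
		fmt.Printf("%s missing: calico-node has not initialized this node yet\n", nodenameFile)
		os.Exit(1)
	case err != nil:
		fmt.Printf("stat %s: %v\n", nodenameFile, err)
		os.Exit(1)
	default:
		fmt.Printf("node registered as %q; CNI add/delete can proceed\n", string(data))
	}
}
```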
Nov 8 00:21:23.386628 containerd[1463]: time="2025-11-08T00:21:23.386573734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldqsl,Uid:71de4983-7c24-4272-8fa7-0a4b5407d2c0,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:23.460570 containerd[1463]: time="2025-11-08T00:21:23.460515874Z" level=error msg="Failed to destroy network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.460944 containerd[1463]: time="2025-11-08T00:21:23.460914510Z" level=error msg="encountered an error cleaning up failed sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.461010 containerd[1463]: time="2025-11-08T00:21:23.460981459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldqsl,Uid:71de4983-7c24-4272-8fa7-0a4b5407d2c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.461323 kubelet[2501]: E1108 00:21:23.461288 2501 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.461391 kubelet[2501]: E1108 00:21:23.461353 2501 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ldqsl" Nov 8 00:21:23.461391 kubelet[2501]: E1108 00:21:23.461374 2501 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ldqsl" Nov 8 00:21:23.461446 kubelet[2501]: E1108 00:21:23.461428 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ldqsl_calico-system(71de4983-7c24-4272-8fa7-0a4b5407d2c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ldqsl_calico-system(71de4983-7c24-4272-8fa7-0a4b5407d2c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:23.528757 kubelet[2501]: I1108 00:21:23.528709 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:23.531729 kubelet[2501]: I1108 00:21:23.531698 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:23.583350 containerd[1463]: time="2025-11-08T00:21:23.580938566Z" level=info msg="StopPodSandbox for \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\"" Nov 8 00:21:23.583350 containerd[1463]: time="2025-11-08T00:21:23.582902254Z" level=info msg="Ensure that sandbox 11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867 in task-service has been cleanup successfully" Nov 8 00:21:23.583350 containerd[1463]: time="2025-11-08T00:21:23.583216487Z" level=info msg="StopPodSandbox for \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\"" Nov 8 00:21:23.583613 containerd[1463]: time="2025-11-08T00:21:23.583391437Z" level=info msg="Ensure that sandbox e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45 in task-service has been cleanup successfully" Nov 8 00:21:23.587663 kubelet[2501]: I1108 00:21:23.587629 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:23.593442 containerd[1463]: time="2025-11-08T00:21:23.593392581Z" level=info msg="StopPodSandbox for \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\"" Nov 8 00:21:23.593649 containerd[1463]: time="2025-11-08T00:21:23.593631931Z" level=info msg="Ensure that sandbox d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c in task-service has been cleanup successfully" Nov 8 00:21:23.598932 kubelet[2501]: I1108 00:21:23.598275 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:23.599259 containerd[1463]: time="2025-11-08T00:21:23.599224117Z" level=info msg="StopPodSandbox for \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\"" Nov 8 00:21:23.599421 containerd[1463]: time="2025-11-08T00:21:23.599402081Z" level=info msg="Ensure that sandbox 3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05 in task-service has been cleanup successfully" Nov 8 00:21:23.603890 kubelet[2501]: I1108 00:21:23.603789 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:23.607365 containerd[1463]: time="2025-11-08T00:21:23.607284661Z" level=info msg="StopPodSandbox for \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\"" Nov 8 00:21:23.609232 containerd[1463]: time="2025-11-08T00:21:23.609082111Z" level=info msg="Ensure that sandbox 37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891 in task-service has been cleanup successfully" Nov 8 00:21:23.609982 kubelet[2501]: I1108 00:21:23.609946 2501 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:23.617101 containerd[1463]: time="2025-11-08T00:21:23.617014899Z" level=info msg="StopPodSandbox for \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\"" Nov 8 00:21:23.618911 containerd[1463]: time="2025-11-08T00:21:23.618691712Z" level=info msg="Ensure that sandbox 2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d in task-service has been cleanup successfully" Nov 8 00:21:23.636732 kubelet[2501]: I1108 00:21:23.636687 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:23.652023 containerd[1463]: time="2025-11-08T00:21:23.651958925Z" level=info msg="StopPodSandbox for \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\"" Nov 8 00:21:23.652263 containerd[1463]: time="2025-11-08T00:21:23.652225253Z" level=info msg="Ensure that sandbox f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776 in task-service has been cleanup successfully" Nov 8 00:21:23.660034 kubelet[2501]: I1108 00:21:23.659999 2501 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:23.663419 containerd[1463]: time="2025-11-08T00:21:23.663329240Z" level=info msg="StopPodSandbox for \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\"" Nov 8 00:21:23.671691 containerd[1463]: time="2025-11-08T00:21:23.670228275Z" level=info msg="Ensure that sandbox 5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435 in task-service has been cleanup successfully" Nov 8 00:21:23.748078 containerd[1463]: time="2025-11-08T00:21:23.748010956Z" level=error msg="StopPodSandbox for \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\" failed" error="failed to destroy network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.748725 kubelet[2501]: E1108 00:21:23.748674 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:23.756707 kubelet[2501]: E1108 00:21:23.748749 2501 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05"} Nov 8 00:21:23.756903 kubelet[2501]: E1108 00:21:23.756733 2501 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10d0265b-9d98-419e-98b8-ef3078177b60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:23.756903 
kubelet[2501]: E1108 00:21:23.756764 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10d0265b-9d98-419e-98b8-ef3078177b60\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" podUID="10d0265b-9d98-419e-98b8-ef3078177b60" Nov 8 00:21:23.783884 containerd[1463]: time="2025-11-08T00:21:23.783813793Z" level=error msg="StopPodSandbox for \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\" failed" error="failed to destroy network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.784352 kubelet[2501]: E1108 00:21:23.784303 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:23.784500 kubelet[2501]: E1108 00:21:23.784370 2501 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45"} Nov 8 00:21:23.784500 kubelet[2501]: E1108 00:21:23.784468 2501 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"747a1d48-6b9b-4ad4-aae6-7e918f295a7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:23.784699 kubelet[2501]: E1108 00:21:23.784498 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"747a1d48-6b9b-4ad4-aae6-7e918f295a7f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" podUID="747a1d48-6b9b-4ad4-aae6-7e918f295a7f" Nov 8 00:21:23.811453 containerd[1463]: time="2025-11-08T00:21:23.811347996Z" level=error msg="StopPodSandbox for \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\" failed" error="failed to destroy network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Nov 8 00:21:23.811797 kubelet[2501]: E1108 00:21:23.811734 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:23.811934 kubelet[2501]: E1108 00:21:23.811820 2501 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d"} Nov 8 00:21:23.811934 kubelet[2501]: E1108 00:21:23.811889 2501 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71de4983-7c24-4272-8fa7-0a4b5407d2c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:23.812089 kubelet[2501]: E1108 00:21:23.811931 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71de4983-7c24-4272-8fa7-0a4b5407d2c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:23.812303 containerd[1463]: time="2025-11-08T00:21:23.811768077Z" level=error msg="StopPodSandbox for \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\" failed" error="failed to destroy network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.812909 kubelet[2501]: E1108 00:21:23.812515 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:23.812909 kubelet[2501]: E1108 00:21:23.812589 2501 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867"} Nov 8 00:21:23.812909 kubelet[2501]: E1108 00:21:23.812636 2501 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b72e8b4-2554-45c8-82a8-87c096020fee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:23.812909 kubelet[2501]: E1108 00:21:23.812677 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b72e8b4-2554-45c8-82a8-87c096020fee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" podUID="9b72e8b4-2554-45c8-82a8-87c096020fee" Nov 8 00:21:23.832116 containerd[1463]: time="2025-11-08T00:21:23.832055797Z" level=error msg="StopPodSandbox for \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\" failed" error="failed to destroy network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.832310 containerd[1463]: time="2025-11-08T00:21:23.832187347Z" level=error msg="StopPodSandbox for \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\" failed" error="failed to destroy network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.832694 kubelet[2501]: E1108 00:21:23.832374 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:23.832694 kubelet[2501]: E1108 00:21:23.832434 2501 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c"} Nov 8 00:21:23.832694 kubelet[2501]: E1108 00:21:23.832476 2501 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"329e0556-11fe-424d-9621-4c503891f4c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:23.832694 kubelet[2501]: E1108 00:21:23.832498 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"329e0556-11fe-424d-9621-4c503891f4c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-fm88z" podUID="329e0556-11fe-424d-9621-4c503891f4c4" Nov 8 00:21:23.833205 kubelet[2501]: E1108 00:21:23.832530 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:23.833205 kubelet[2501]: E1108 00:21:23.832543 2501 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776"} Nov 8 00:21:23.833205 kubelet[2501]: E1108 00:21:23.832560 2501 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:23.833205 kubelet[2501]: E1108 00:21:23.832575 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-86784b5f66-2xvmd" podUID="117b536c-81bf-4f2f-9f1f-8a64cd38e25c" Nov 8 00:21:23.848551 containerd[1463]: time="2025-11-08T00:21:23.846345854Z" level=error msg="StopPodSandbox for \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\" failed" error="failed to destroy network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.848551 containerd[1463]: time="2025-11-08T00:21:23.848416640Z" level=error msg="StopPodSandbox for \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\" failed" error="failed to destroy network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:21:23.848853 kubelet[2501]: E1108 00:21:23.846682 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:23.848853 kubelet[2501]: E1108 00:21:23.846747 2501 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435"} Nov 8 00:21:23.848853 kubelet[2501]: E1108 00:21:23.847001 2501 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"25d1e64e-eeda-401a-ad8a-78903d2ff60f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:23.848853 kubelet[2501]: E1108 00:21:23.847047 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"25d1e64e-eeda-401a-ad8a-78903d2ff60f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h5wgp" podUID="25d1e64e-eeda-401a-ad8a-78903d2ff60f" Nov 8 00:21:23.849298 kubelet[2501]: E1108 00:21:23.848642 2501 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:23.849298 kubelet[2501]: E1108 00:21:23.848679 2501 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891"} Nov 8 00:21:23.849298 kubelet[2501]: E1108 00:21:23.848710 2501 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf32dec8-2497-4f2e-91ee-003d9b7723b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:21:23.849298 kubelet[2501]: E1108 00:21:23.848738 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf32dec8-2497-4f2e-91ee-003d9b7723b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-c86nr" podUID="bf32dec8-2497-4f2e-91ee-003d9b7723b4" Nov 8 00:21:24.737243 kubelet[2501]: I1108 00:21:24.737186 2501 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:21:24.740731 kubelet[2501]: E1108 00:21:24.738840 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:25.665494 kubelet[2501]: E1108 00:21:25.665453 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:28.593365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2224626870.mount: Deactivated successfully. Nov 8 00:21:28.672522 containerd[1463]: time="2025-11-08T00:21:28.672318496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:28.672522 containerd[1463]: time="2025-11-08T00:21:28.651062505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:21:28.698927 containerd[1463]: time="2025-11-08T00:21:28.698180684Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:28.699225 containerd[1463]: time="2025-11-08T00:21:28.699188047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:21:28.699843 containerd[1463]: time="2025-11-08T00:21:28.699816354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.167775853s" Nov 8 00:21:28.699897 containerd[1463]: time="2025-11-08T00:21:28.699850186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:21:28.760906 containerd[1463]: time="2025-11-08T00:21:28.760746316Z" level=info msg="CreateContainer within sandbox \"1e25e415a3c670d93a8f524c3c2857f860de6898cd304891cd90a0d865c6a684\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:21:28.812170 containerd[1463]: time="2025-11-08T00:21:28.812128405Z" level=info msg="CreateContainer within sandbox \"1e25e415a3c670d93a8f524c3c2857f860de6898cd304891cd90a0d865c6a684\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"95f8b37a3dad95492797f41553892efbb2018c7c2331b59b8ce2e7a1f31535a2\"" Nov 8 00:21:28.826859 containerd[1463]: time="2025-11-08T00:21:28.826802641Z" level=info msg="StartContainer for \"95f8b37a3dad95492797f41553892efbb2018c7c2331b59b8ce2e7a1f31535a2\"" Nov 8 00:21:28.916312 systemd[1]: Started cri-containerd-95f8b37a3dad95492797f41553892efbb2018c7c2331b59b8ce2e7a1f31535a2.scope - libcontainer container 95f8b37a3dad95492797f41553892efbb2018c7c2331b59b8ce2e7a1f31535a2. 
Nov 8 00:21:28.965996 containerd[1463]: time="2025-11-08T00:21:28.965489691Z" level=info msg="StartContainer for \"95f8b37a3dad95492797f41553892efbb2018c7c2331b59b8ce2e7a1f31535a2\" returns successfully" Nov 8 00:21:29.175420 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:21:29.176745 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 8 00:21:29.475893 containerd[1463]: time="2025-11-08T00:21:29.475681770Z" level=info msg="StopPodSandbox for \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\"" Nov 8 00:21:29.701915 kubelet[2501]: E1108 00:21:29.699476 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:29.800365 kubelet[2501]: I1108 00:21:29.789518 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f8dzs" podStartSLOduration=2.413836324 podStartE2EDuration="16.753106451s" podCreationTimestamp="2025-11-08 00:21:13 +0000 UTC" firstStartedPulling="2025-11-08 00:21:14.367795842 +0000 UTC m=+25.178358939" lastFinishedPulling="2025-11-08 00:21:28.707065984 +0000 UTC m=+39.517629066" observedRunningTime="2025-11-08 00:21:29.75026785 +0000 UTC m=+40.560830958" watchObservedRunningTime="2025-11-08 00:21:29.753106451 +0000 UTC m=+40.563669555" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.587 [INFO][3715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.612 [INFO][3715] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" iface="eth0" netns="/var/run/netns/cni-cb2d2f82-3734-a4c1-03fe-bfcb2c8154e2" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.613 [INFO][3715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" iface="eth0" netns="/var/run/netns/cni-cb2d2f82-3734-a4c1-03fe-bfcb2c8154e2" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.617 [INFO][3715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" iface="eth0" netns="/var/run/netns/cni-cb2d2f82-3734-a4c1-03fe-bfcb2c8154e2" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.617 [INFO][3715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.617 [INFO][3715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.948 [INFO][3724] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" HandleID="k8s-pod-network.f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.952 [INFO][3724] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.952 [INFO][3724] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.981 [WARNING][3724] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" HandleID="k8s-pod-network.f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.981 [INFO][3724] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" HandleID="k8s-pod-network.f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.986 [INFO][3724] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:29.993983 containerd[1463]: 2025-11-08 00:21:29.989 [INFO][3715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:29.999451 containerd[1463]: time="2025-11-08T00:21:29.994565686Z" level=info msg="TearDown network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\" successfully" Nov 8 00:21:29.999451 containerd[1463]: time="2025-11-08T00:21:29.994606679Z" level=info msg="StopPodSandbox for \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\" returns successfully" Nov 8 00:21:29.998072 systemd[1]: run-netns-cni\x2dcb2d2f82\x2d3734\x2da4c1\x2d03fe\x2dbfcb2c8154e2.mount: Deactivated successfully. Nov 8 00:21:30.132955 kubelet[2501]: I1108 00:21:30.132903 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-whisker-ca-bundle\") pod \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\" (UID: \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\") " Nov 8 00:21:30.132955 kubelet[2501]: I1108 00:21:30.132966 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7zhw\" (UniqueName: \"kubernetes.io/projected/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-kube-api-access-t7zhw\") pod \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\" (UID: \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\") " Nov 8 00:21:30.133486 kubelet[2501]: I1108 00:21:30.133005 2501 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-whisker-backend-key-pair\") pod \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\" (UID: \"117b536c-81bf-4f2f-9f1f-8a64cd38e25c\") " Nov 8 00:21:30.167269 kubelet[2501]: I1108 00:21:30.162654 2501 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "117b536c-81bf-4f2f-9f1f-8a64cd38e25c" (UID: "117b536c-81bf-4f2f-9f1f-8a64cd38e25c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:21:30.187241 systemd[1]: var-lib-kubelet-pods-117b536c\x2d81bf\x2d4f2f\x2d9f1f\x2d8a64cd38e25c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt7zhw.mount: Deactivated successfully. 
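
The pod_startup_latency_tracker entry for calico-node-f8dzs above is worth decoding: kubelet's podStartSLOduration excludes image-pull time, so it equals the end-to-end duration minus the window between firstStartedPulling and lastFinishedPulling. The monotonic (m=+...) offsets printed in the entry check out exactly; here is the arithmetic, using only numbers from the log.

```go
// Verifying podStartSLOduration from the log entry above:
// SLO = E2E - (lastFinishedPulling - firstStartedPulling),
// since kubelet excludes image-pull time from the startup SLO.
package main

import "fmt"

func main() {
	e2e := 16.753106451                 // podStartE2EDuration, seconds
	pull := 39.517629066 - 25.178358939 // m=+ offsets: lastFinishedPulling - firstStartedPulling
	fmt.Printf("SLO duration: %.9fs\n", e2e-pull) // 2.413836324s, matching the log
}
```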
Nov 8 00:21:30.187615 systemd[1]: var-lib-kubelet-pods-117b536c\x2d81bf\x2d4f2f\x2d9f1f\x2d8a64cd38e25c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:21:30.190420 kubelet[2501]: I1108 00:21:30.189137 2501 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-kube-api-access-t7zhw" (OuterVolumeSpecName: "kube-api-access-t7zhw") pod "117b536c-81bf-4f2f-9f1f-8a64cd38e25c" (UID: "117b536c-81bf-4f2f-9f1f-8a64cd38e25c"). InnerVolumeSpecName "kube-api-access-t7zhw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:21:30.190798 kubelet[2501]: I1108 00:21:30.190749 2501 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "117b536c-81bf-4f2f-9f1f-8a64cd38e25c" (UID: "117b536c-81bf-4f2f-9f1f-8a64cd38e25c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:21:30.233894 kubelet[2501]: I1108 00:21:30.233831 2501 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-f4234a6c60\" DevicePath \"\"" Nov 8 00:21:30.233894 kubelet[2501]: I1108 00:21:30.233891 2501 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-whisker-ca-bundle\") on node \"ci-4081.3.6-n-f4234a6c60\" DevicePath \"\"" Nov 8 00:21:30.233894 kubelet[2501]: I1108 00:21:30.233909 2501 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t7zhw\" (UniqueName: \"kubernetes.io/projected/117b536c-81bf-4f2f-9f1f-8a64cd38e25c-kube-api-access-t7zhw\") on node \"ci-4081.3.6-n-f4234a6c60\" DevicePath \"\"" Nov 8 00:21:30.702959 kubelet[2501]: E1108 00:21:30.701251 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:30.718330 systemd[1]: Removed slice kubepods-besteffort-pod117b536c_81bf_4f2f_9f1f_8a64cd38e25c.slice - libcontainer container kubepods-besteffort-pod117b536c_81bf_4f2f_9f1f_8a64cd38e25c.slice. Nov 8 00:21:30.856125 systemd[1]: Created slice kubepods-besteffort-pod0e8bedc1_e771_4bb5_bd8c_8fc39604616a.slice - libcontainer container kubepods-besteffort-pod0e8bedc1_e771_4bb5_bd8c_8fc39604616a.slice. 
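
A note on the mount-unit names above (var-lib-kubelet-pods-117b536c\x2d81bf-... with a kubernetes.io\x7eprojected segment): systemd derives unit names from filesystem paths by turning '/' into '-' and hex-escaping every byte outside a small allowed set, so each literal '-' in the pod UID becomes \x2d and '~' becomes \x7e. The sketch below reimplements the documented systemd-escape rules for illustration; it is an assumption-laden approximation, not systemd's code.

```go
// Approximation of `systemd-escape --path`: trim slashes at the ends,
// map '/' to '-', keep ASCII alphanumerics plus ':', '_' and non-leading
// '.', and \xNN-escape everything else (including '-' and '~').
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_',
			c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// var-lib-kubelet-pods-117b536c\x2d81bf\x2d4f2f\x2d9f1f\x2d8a64cd38e25c.mount
	fmt.Println(escapePath("/var/lib/kubelet/pods/117b536c-81bf-4f2f-9f1f-8a64cd38e25c") + ".mount")
	fmt.Println(escapePath("kubernetes.io~projected")) // kubernetes.io\x7eprojected
}
```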
Nov 8 00:21:30.940074 kubelet[2501]: I1108 00:21:30.940022 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0e8bedc1-e771-4bb5-bd8c-8fc39604616a-whisker-backend-key-pair\") pod \"whisker-76544c75c6-2xtdd\" (UID: \"0e8bedc1-e771-4bb5-bd8c-8fc39604616a\") " pod="calico-system/whisker-76544c75c6-2xtdd" Nov 8 00:21:30.940074 kubelet[2501]: I1108 00:21:30.940082 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0e8bedc1-e771-4bb5-bd8c-8fc39604616a-whisker-ca-bundle\") pod \"whisker-76544c75c6-2xtdd\" (UID: \"0e8bedc1-e771-4bb5-bd8c-8fc39604616a\") " pod="calico-system/whisker-76544c75c6-2xtdd" Nov 8 00:21:30.940270 kubelet[2501]: I1108 00:21:30.940098 2501 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89wrs\" (UniqueName: \"kubernetes.io/projected/0e8bedc1-e771-4bb5-bd8c-8fc39604616a-kube-api-access-89wrs\") pod \"whisker-76544c75c6-2xtdd\" (UID: \"0e8bedc1-e771-4bb5-bd8c-8fc39604616a\") " pod="calico-system/whisker-76544c75c6-2xtdd" Nov 8 00:21:31.168983 containerd[1463]: time="2025-11-08T00:21:31.168929567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76544c75c6-2xtdd,Uid:0e8bedc1-e771-4bb5-bd8c-8fc39604616a,Namespace:calico-system,Attempt:0,}" Nov 8 00:21:31.378423 kubelet[2501]: I1108 00:21:31.378222 2501 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="117b536c-81bf-4f2f-9f1f-8a64cd38e25c" path="/var/lib/kubelet/pods/117b536c-81bf-4f2f-9f1f-8a64cd38e25c/volumes" Nov 8 00:21:31.569974 systemd-networkd[1367]: cali7d6a0853394: Link UP Nov 8 00:21:31.573392 systemd-networkd[1367]: cali7d6a0853394: Gained carrier Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.302 [INFO][3869] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.323 [INFO][3869] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0 whisker-76544c75c6- calico-system 0e8bedc1-e771-4bb5-bd8c-8fc39604616a 933 0 2025-11-08 00:21:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76544c75c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-f4234a6c60 whisker-76544c75c6-2xtdd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7d6a0853394 [] [] }} ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Namespace="calico-system" Pod="whisker-76544c75c6-2xtdd" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.324 [INFO][3869] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Namespace="calico-system" Pod="whisker-76544c75c6-2xtdd" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.404 [INFO][3886] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" 
HandleID="k8s-pod-network.9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.404 [INFO][3886] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" HandleID="k8s-pod-network.9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032bbf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f4234a6c60", "pod":"whisker-76544c75c6-2xtdd", "timestamp":"2025-11-08 00:21:31.404555943 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f4234a6c60", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.404 [INFO][3886] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.404 [INFO][3886] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.404 [INFO][3886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f4234a6c60' Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.433 [INFO][3886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.453 [INFO][3886] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.464 [INFO][3886] ipam/ipam.go 511: Trying affinity for 192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.468 [INFO][3886] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.471 [INFO][3886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.472 [INFO][3886] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.474 [INFO][3886] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63 Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.485 [INFO][3886] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.512 [INFO][3886] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.193/26] block=192.168.89.192/26 handle="k8s-pod-network.9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.512 [INFO][3886] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.193/26] handle="k8s-pod-network.9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.512 [INFO][3886] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:31.657192 containerd[1463]: 2025-11-08 00:21:31.512 [INFO][3886] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.193/26] IPv6=[] ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" HandleID="k8s-pod-network.9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" Nov 8 00:21:31.659995 containerd[1463]: 2025-11-08 00:21:31.527 [INFO][3869] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Namespace="calico-system" Pod="whisker-76544c75c6-2xtdd" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0", GenerateName:"whisker-76544c75c6-", Namespace:"calico-system", SelfLink:"", UID:"0e8bedc1-e771-4bb5-bd8c-8fc39604616a", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76544c75c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"", Pod:"whisker-76544c75c6-2xtdd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7d6a0853394", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:31.659995 containerd[1463]: 2025-11-08 00:21:31.527 [INFO][3869] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.193/32] ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Namespace="calico-system" Pod="whisker-76544c75c6-2xtdd" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" Nov 8 00:21:31.659995 containerd[1463]: 2025-11-08 00:21:31.527 [INFO][3869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7d6a0853394 ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Namespace="calico-system" Pod="whisker-76544c75c6-2xtdd" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" Nov 8 00:21:31.659995 containerd[1463]: 2025-11-08 00:21:31.581 [INFO][3869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Namespace="calico-system" Pod="whisker-76544c75c6-2xtdd" 
WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" Nov 8 00:21:31.659995 containerd[1463]: 2025-11-08 00:21:31.591 [INFO][3869] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Namespace="calico-system" Pod="whisker-76544c75c6-2xtdd" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0", GenerateName:"whisker-76544c75c6-", Namespace:"calico-system", SelfLink:"", UID:"0e8bedc1-e771-4bb5-bd8c-8fc39604616a", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76544c75c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63", Pod:"whisker-76544c75c6-2xtdd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7d6a0853394", MAC:"b2:ff:c1:24:55:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:31.659995 containerd[1463]: 2025-11-08 00:21:31.648 [INFO][3869] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63" Namespace="calico-system" Pod="whisker-76544c75c6-2xtdd" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--76544c75c6--2xtdd-eth0" Nov 8 00:21:31.694475 containerd[1463]: time="2025-11-08T00:21:31.693415038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:31.694475 containerd[1463]: time="2025-11-08T00:21:31.693490036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:31.694475 containerd[1463]: time="2025-11-08T00:21:31.693504743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:31.694475 containerd[1463]: time="2025-11-08T00:21:31.694054850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:31.733346 systemd[1]: Started cri-containerd-9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63.scope - libcontainer container 9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63. 
Nov 8 00:21:31.746344 kubelet[2501]: E1108 00:21:31.745786 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:31.995525 containerd[1463]: time="2025-11-08T00:21:31.995333877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76544c75c6-2xtdd,Uid:0e8bedc1-e771-4bb5-bd8c-8fc39604616a,Namespace:calico-system,Attempt:0,} returns sandbox id \"9e38131d2fd756cdda3bcdc74173992c76a0fc10d3ea56237f8161c6cb151b63\"" Nov 8 00:21:32.012370 containerd[1463]: time="2025-11-08T00:21:32.010230186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:21:32.186911 kernel: bpftool[3996]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:21:32.379286 containerd[1463]: time="2025-11-08T00:21:32.379034513Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:32.386379 containerd[1463]: time="2025-11-08T00:21:32.380066093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:21:32.386379 containerd[1463]: time="2025-11-08T00:21:32.382565408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:21:32.393033 kubelet[2501]: E1108 00:21:32.390691 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:32.396107 kubelet[2501]: E1108 00:21:32.395862 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:32.401949 kubelet[2501]: E1108 00:21:32.401865 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6531dc8e84ac47b0b0ceb8f72d200569,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89wrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76544c75c6-2xtdd_calico-system(0e8bedc1-e771-4bb5-bd8c-8fc39604616a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:32.404624 containerd[1463]: time="2025-11-08T00:21:32.404338379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:21:32.650193 systemd-networkd[1367]: vxlan.calico: Link UP Nov 8 00:21:32.650206 systemd-networkd[1367]: vxlan.calico: Gained carrier Nov 8 00:21:32.727646 containerd[1463]: time="2025-11-08T00:21:32.727390740Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:32.729443 containerd[1463]: time="2025-11-08T00:21:32.728359207Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:21:32.729443 containerd[1463]: time="2025-11-08T00:21:32.728384958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:32.729597 kubelet[2501]: E1108 00:21:32.728598 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:32.729597 kubelet[2501]: E1108 00:21:32.728648 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:32.729702 kubelet[2501]: E1108 00:21:32.728780 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89wrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76544c75c6-2xtdd_calico-system(0e8bedc1-e771-4bb5-bd8c-8fc39604616a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:32.729949 kubelet[2501]: E1108 00:21:32.729915 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76544c75c6-2xtdd" podUID="0e8bedc1-e771-4bb5-bd8c-8fc39604616a" Nov 8 00:21:32.745265 kubelet[2501]: E1108 00:21:32.745199 2501 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76544c75c6-2xtdd" podUID="0e8bedc1-e771-4bb5-bd8c-8fc39604616a" Nov 8 00:21:32.767683 systemd-networkd[1367]: cali7d6a0853394: Gained IPv6LL Nov 8 00:21:33.740940 kubelet[2501]: E1108 00:21:33.740773 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76544c75c6-2xtdd" podUID="0e8bedc1-e771-4bb5-bd8c-8fc39604616a" Nov 8 00:21:33.855274 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Nov 8 00:21:35.374490 containerd[1463]: time="2025-11-08T00:21:35.373951696Z" level=info msg="StopPodSandbox for \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\"" Nov 8 00:21:35.374490 containerd[1463]: time="2025-11-08T00:21:35.374090646Z" level=info msg="StopPodSandbox for \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\"" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.450 [INFO][4088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.450 [INFO][4088] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" iface="eth0" netns="/var/run/netns/cni-de20bd6c-f067-5c66-4ae9-801c9be63286" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.452 [INFO][4088] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" iface="eth0" netns="/var/run/netns/cni-de20bd6c-f067-5c66-4ae9-801c9be63286" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.453 [INFO][4088] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" iface="eth0" netns="/var/run/netns/cni-de20bd6c-f067-5c66-4ae9-801c9be63286" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.453 [INFO][4088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.453 [INFO][4088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.492 [INFO][4103] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" HandleID="k8s-pod-network.e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.492 [INFO][4103] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.492 [INFO][4103] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.503 [WARNING][4103] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" HandleID="k8s-pod-network.e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.503 [INFO][4103] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" HandleID="k8s-pod-network.e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.506 [INFO][4103] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:35.516592 containerd[1463]: 2025-11-08 00:21:35.511 [INFO][4088] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:35.520292 containerd[1463]: time="2025-11-08T00:21:35.519821896Z" level=info msg="TearDown network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\" successfully" Nov 8 00:21:35.520292 containerd[1463]: time="2025-11-08T00:21:35.519924825Z" level=info msg="StopPodSandbox for \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\" returns successfully" Nov 8 00:21:35.523395 containerd[1463]: time="2025-11-08T00:21:35.521989927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596fc4bd76-8l5hn,Uid:747a1d48-6b9b-4ad4-aae6-7e918f295a7f,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:21:35.523532 systemd[1]: run-netns-cni\x2dde20bd6c\x2df067\x2d5c66\x2d4ae9\x2d801c9be63286.mount: Deactivated successfully. Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.450 [INFO][4089] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.451 [INFO][4089] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" iface="eth0" netns="/var/run/netns/cni-6e0d4142-85dc-293a-635f-be951c41fcf2" Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.451 [INFO][4089] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" iface="eth0" netns="/var/run/netns/cni-6e0d4142-85dc-293a-635f-be951c41fcf2" Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.453 [INFO][4089] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" iface="eth0" netns="/var/run/netns/cni-6e0d4142-85dc-293a-635f-be951c41fcf2" Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.453 [INFO][4089] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.453 [INFO][4089] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.502 [INFO][4102] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" HandleID="k8s-pod-network.11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.503 [INFO][4102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.506 [INFO][4102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.515 [WARNING][4102] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" HandleID="k8s-pod-network.11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.515 [INFO][4102] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" HandleID="k8s-pod-network.11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.521 [INFO][4102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:35.531238 containerd[1463]: 2025-11-08 00:21:35.527 [INFO][4089] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:35.533242 containerd[1463]: time="2025-11-08T00:21:35.532825295Z" level=info msg="TearDown network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\" successfully" Nov 8 00:21:35.533242 containerd[1463]: time="2025-11-08T00:21:35.532925966Z" level=info msg="StopPodSandbox for \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\" returns successfully" Nov 8 00:21:35.535901 containerd[1463]: time="2025-11-08T00:21:35.534205684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596fc4bd76-bh4h9,Uid:9b72e8b4-2554-45c8-82a8-87c096020fee,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:21:35.536289 systemd[1]: run-netns-cni\x2d6e0d4142\x2d85dc\x2d293a\x2d635f\x2dbe951c41fcf2.mount: Deactivated successfully. Nov 8 00:21:35.739824 systemd-networkd[1367]: cali5d01887687d: Link UP Nov 8 00:21:35.742788 systemd-networkd[1367]: cali5d01887687d: Gained carrier Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.620 [INFO][4115] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0 calico-apiserver-596fc4bd76- calico-apiserver 747a1d48-6b9b-4ad4-aae6-7e918f295a7f 968 0 2025-11-08 00:21:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:596fc4bd76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-f4234a6c60 calico-apiserver-596fc4bd76-8l5hn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d01887687d [] [] }} ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-8l5hn" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.620 [INFO][4115] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-8l5hn" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.673 [INFO][4138] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" HandleID="k8s-pod-network.49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.673 [INFO][4138] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" HandleID="k8s-pod-network.49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-f4234a6c60", "pod":"calico-apiserver-596fc4bd76-8l5hn", "timestamp":"2025-11-08 00:21:35.673032914 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f4234a6c60", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.673 [INFO][4138] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.673 [INFO][4138] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.673 [INFO][4138] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f4234a6c60' Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.687 [INFO][4138] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.694 [INFO][4138] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.700 [INFO][4138] ipam/ipam.go 511: Trying affinity for 192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.702 [INFO][4138] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.705 [INFO][4138] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.705 [INFO][4138] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.707 [INFO][4138] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.715 [INFO][4138] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.729 [INFO][4138] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.194/26] block=192.168.89.192/26 handle="k8s-pod-network.49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.729 [INFO][4138] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.194/26] handle="k8s-pod-network.49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.730 [INFO][4138] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:35.772403 containerd[1463]: 2025-11-08 00:21:35.730 [INFO][4138] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.194/26] IPv6=[] ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" HandleID="k8s-pod-network.49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.774282 containerd[1463]: 2025-11-08 00:21:35.735 [INFO][4115] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-8l5hn" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0", GenerateName:"calico-apiserver-596fc4bd76-", Namespace:"calico-apiserver", SelfLink:"", UID:"747a1d48-6b9b-4ad4-aae6-7e918f295a7f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596fc4bd76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"", Pod:"calico-apiserver-596fc4bd76-8l5hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d01887687d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:35.774282 containerd[1463]: 2025-11-08 00:21:35.735 [INFO][4115] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.194/32] ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-8l5hn" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.774282 containerd[1463]: 2025-11-08 00:21:35.735 [INFO][4115] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d01887687d ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-8l5hn" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.774282 containerd[1463]: 2025-11-08 00:21:35.744 [INFO][4115] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-8l5hn" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.774282 containerd[1463]: 2025-11-08 00:21:35.745 [INFO][4115] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-8l5hn" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0", GenerateName:"calico-apiserver-596fc4bd76-", Namespace:"calico-apiserver", SelfLink:"", UID:"747a1d48-6b9b-4ad4-aae6-7e918f295a7f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596fc4bd76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b", Pod:"calico-apiserver-596fc4bd76-8l5hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d01887687d", MAC:"de:8e:3b:b9:21:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:35.774282 containerd[1463]: 2025-11-08 00:21:35.768 [INFO][4115] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-8l5hn" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:35.812316 containerd[1463]: time="2025-11-08T00:21:35.812093108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:35.812316 containerd[1463]: time="2025-11-08T00:21:35.812161560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:35.812316 containerd[1463]: time="2025-11-08T00:21:35.812173153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:35.812316 containerd[1463]: time="2025-11-08T00:21:35.812261012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:35.846572 systemd[1]: Started cri-containerd-49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b.scope - libcontainer container 49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b. 
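
Returning to the whisker image-pull failures earlier: the registry answers 404 for both ghcr.io/flatcar/calico/whisker:v3.30.4 and whisker-backend:v3.30.4 ("trying next host - response was http.StatusNotFound" means containerd exhausted its hosts for that reference), so the tags genuinely do not resolve. Kubelet reports ErrImagePull on the failing attempt and ImagePullBackOff on every sync while the backoff window is open, which is exactly the progression between 00:21:32 and 00:21:33. A sketch of that escalation, assuming the commonly cited kubelet defaults of a 10s initial image-pull backoff doubling to a 5m cap; this illustrates the pattern, not kubelet's implementation.

```go
// ErrImagePull -> ImagePullBackOff escalation (assumed default delays:
// 10s initial, doubling per failure, capped at 5 minutes).
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxDelay = 5 * time.Minute
	delay := 10 * time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		reason := "ImagePullBackOff"
		if attempt == 1 {
			reason = "ErrImagePull" // the first failure is reported directly
		}
		fmt.Printf("attempt %d: %s (registry returned NotFound); next retry in %v\n",
			attempt, reason, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```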
Nov 8 00:21:35.900972 systemd-networkd[1367]: cali06ef27cc135: Link UP Nov 8 00:21:35.904023 systemd-networkd[1367]: cali06ef27cc135: Gained carrier Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.625 [INFO][4124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0 calico-apiserver-596fc4bd76- calico-apiserver 9b72e8b4-2554-45c8-82a8-87c096020fee 967 0 2025-11-08 00:21:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:596fc4bd76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-f4234a6c60 calico-apiserver-596fc4bd76-bh4h9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06ef27cc135 [] [] }} ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-bh4h9" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.628 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-bh4h9" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.691 [INFO][4143] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" HandleID="k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.691 [INFO][4143] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" HandleID="k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5840), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-f4234a6c60", "pod":"calico-apiserver-596fc4bd76-bh4h9", "timestamp":"2025-11-08 00:21:35.691612085 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f4234a6c60", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.691 [INFO][4143] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.730 [INFO][4143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.730 [INFO][4143] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f4234a6c60' Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.788 [INFO][4143] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.808 [INFO][4143] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.822 [INFO][4143] ipam/ipam.go 511: Trying affinity for 192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.829 [INFO][4143] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.836 [INFO][4143] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.838 [INFO][4143] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.849 [INFO][4143] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.860 [INFO][4143] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.878 [INFO][4143] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.195/26] block=192.168.89.192/26 handle="k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.878 [INFO][4143] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.195/26] handle="k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.878 [INFO][4143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
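Between acquiring and releasing the lock, [4143] walks Calico's block-affinity algorithm end to end: look up the host's affine blocks, try 192.168.89.192/26, load and confirm the block, assign one address from it, create the handle k8s-pod-network.<sandboxID>, and write the block back to claim 192.168.89.195/26. The client-side call that drives this walk looks roughly as follows, assuming libcalico-go's ipam package (AutoAssign's return types have changed across releases, so treat the signature as an assumption; the argument values are the ones printed in the AutoAssignArgs dump above):

```go
package calicosketch

import (
	"context"
	"fmt"

	client "github.com/projectcalico/libcalico-go/lib/clientv3"
	"github.com/projectcalico/libcalico-go/lib/ipam"
)

// assignPodIP issues the request logged at ipam_plugin.go 275; every
// argument value below is copied from the AutoAssignArgs dump above.
func assignPodIP(ctx context.Context, c client.Interface) error {
	handle := "k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b"
	v4, _, err := c.IPAM().AutoAssign(ctx, ipam.AutoAssignArgs{
		Num4:     1, // "Auto-assign 1 ipv4, 0 ipv6"
		Num6:     0,
		HandleID: &handle,
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "ci-4081.3.6-n-f4234a6c60",
			"pod":       "calico-apiserver-596fc4bd76-bh4h9",
		},
		Hostname: "ci-4081.3.6-n-f4234a6c60",
		// The dump also records IntendedUse:"Workload".
	})
	if err != nil {
		return err
	}
	fmt.Println(v4) // expect 192.168.89.195/26, the address claimed above
	return nil
}
```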
Nov 8 00:21:35.934533 containerd[1463]: 2025-11-08 00:21:35.878 [INFO][4143] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.195/26] IPv6=[] ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" HandleID="k8s-pod-network.5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.936237 containerd[1463]: 2025-11-08 00:21:35.889 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-bh4h9" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0", GenerateName:"calico-apiserver-596fc4bd76-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b72e8b4-2554-45c8-82a8-87c096020fee", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596fc4bd76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"", Pod:"calico-apiserver-596fc4bd76-bh4h9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06ef27cc135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:35.936237 containerd[1463]: 2025-11-08 00:21:35.889 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.195/32] ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-bh4h9" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.936237 containerd[1463]: 2025-11-08 00:21:35.889 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06ef27cc135 ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-bh4h9" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.936237 containerd[1463]: 2025-11-08 00:21:35.904 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-bh4h9" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.936237 containerd[1463]: 2025-11-08 00:21:35.906 [INFO][4124] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-bh4h9" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0", GenerateName:"calico-apiserver-596fc4bd76-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b72e8b4-2554-45c8-82a8-87c096020fee", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596fc4bd76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b", Pod:"calico-apiserver-596fc4bd76-bh4h9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06ef27cc135", MAC:"42:b6:f8:77:ae:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:35.936237 containerd[1463]: 2025-11-08 00:21:35.928 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b" Namespace="calico-apiserver" Pod="calico-apiserver-596fc4bd76-bh4h9" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:35.985942 containerd[1463]: time="2025-11-08T00:21:35.985674134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596fc4bd76-8l5hn,Uid:747a1d48-6b9b-4ad4-aae6-7e918f295a7f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b\"" Nov 8 00:21:35.990791 containerd[1463]: time="2025-11-08T00:21:35.990556755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:36.004536 containerd[1463]: time="2025-11-08T00:21:36.004103724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:36.004536 containerd[1463]: time="2025-11-08T00:21:36.004205577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:36.004536 containerd[1463]: time="2025-11-08T00:21:36.004228704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:36.004536 containerd[1463]: time="2025-11-08T00:21:36.004375445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:36.029129 systemd[1]: Started cri-containerd-5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b.scope - libcontainer container 5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b. Nov 8 00:21:36.087138 containerd[1463]: time="2025-11-08T00:21:36.086850752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-596fc4bd76-bh4h9,Uid:9b72e8b4-2554-45c8-82a8-87c096020fee,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b\"" Nov 8 00:21:36.341646 containerd[1463]: time="2025-11-08T00:21:36.341576695Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:36.342601 containerd[1463]: time="2025-11-08T00:21:36.342517209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:21:36.343485 containerd[1463]: time="2025-11-08T00:21:36.342528858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:36.343613 kubelet[2501]: E1108 00:21:36.342975 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:36.343613 kubelet[2501]: E1108 00:21:36.343036 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:36.343613 kubelet[2501]: E1108 00:21:36.343376 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w62mc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596fc4bd76-8l5hn_calico-apiserver(747a1d48-6b9b-4ad4-aae6-7e918f295a7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:36.344698 kubelet[2501]: E1108 00:21:36.344650 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" podUID="747a1d48-6b9b-4ad4-aae6-7e918f295a7f" Nov 8 00:21:36.345378 containerd[1463]: time="2025-11-08T00:21:36.345100446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:36.373376 containerd[1463]: time="2025-11-08T00:21:36.373320863Z" level=info msg="StopPodSandbox for \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\"" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.455 [INFO][4262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.458 [INFO][4262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" iface="eth0" netns="/var/run/netns/cni-9ebef639-fbbd-7147-69eb-b870ad8fa810" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.458 [INFO][4262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" iface="eth0" netns="/var/run/netns/cni-9ebef639-fbbd-7147-69eb-b870ad8fa810" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.458 [INFO][4262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" iface="eth0" netns="/var/run/netns/cni-9ebef639-fbbd-7147-69eb-b870ad8fa810" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.458 [INFO][4262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.458 [INFO][4262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.488 [INFO][4269] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" HandleID="k8s-pod-network.2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.489 [INFO][4269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.489 [INFO][4269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.497 [WARNING][4269] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" HandleID="k8s-pod-network.2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.497 [INFO][4269] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" HandleID="k8s-pod-network.2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.500 [INFO][4269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:36.504932 containerd[1463]: 2025-11-08 00:21:36.502 [INFO][4262] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:36.504932 containerd[1463]: time="2025-11-08T00:21:36.504761991Z" level=info msg="TearDown network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\" successfully" Nov 8 00:21:36.504932 containerd[1463]: time="2025-11-08T00:21:36.504818787Z" level=info msg="StopPodSandbox for \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\" returns successfully" Nov 8 00:21:36.505959 containerd[1463]: time="2025-11-08T00:21:36.505902676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldqsl,Uid:71de4983-7c24-4272-8fa7-0a4b5407d2c0,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:36.525780 systemd[1]: run-netns-cni\x2d9ebef639\x2dfbbd\x2d7147\x2d69eb\x2db870ad8fa810.mount: Deactivated successfully. Nov 8 00:21:36.664165 systemd-networkd[1367]: calie5946011217: Link UP Nov 8 00:21:36.667831 systemd-networkd[1367]: calie5946011217: Gained carrier Nov 8 00:21:36.679599 containerd[1463]: time="2025-11-08T00:21:36.678107511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:36.685481 containerd[1463]: time="2025-11-08T00:21:36.685353852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:21:36.685922 containerd[1463]: time="2025-11-08T00:21:36.685489969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:36.685994 kubelet[2501]: E1108 00:21:36.685708 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:36.685994 kubelet[2501]: E1108 00:21:36.685770 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:36.687155 kubelet[2501]: E1108 00:21:36.685995 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9kgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596fc4bd76-bh4h9_calico-apiserver(9b72e8b4-2554-45c8-82a8-87c096020fee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:36.687636 kubelet[2501]: E1108 00:21:36.687449 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" podUID="9b72e8b4-2554-45c8-82a8-87c096020fee" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.562 [INFO][4276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0 csi-node-driver- calico-system 71de4983-7c24-4272-8fa7-0a4b5407d2c0 984 0 2025-11-08 00:21:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-f4234a6c60 csi-node-driver-ldqsl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] 
calie5946011217 [] [] }} ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Namespace="calico-system" Pod="csi-node-driver-ldqsl" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.563 [INFO][4276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Namespace="calico-system" Pod="csi-node-driver-ldqsl" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.600 [INFO][4287] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" HandleID="k8s-pod-network.48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.601 [INFO][4287] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" HandleID="k8s-pod-network.48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f4234a6c60", "pod":"csi-node-driver-ldqsl", "timestamp":"2025-11-08 00:21:36.60091677 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f4234a6c60", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.601 [INFO][4287] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.601 [INFO][4287] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
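The kubelet entries a few lines above trace a complete image-pull failure through every layer: containerd's resolver gives up ("trying next host - response was http.StatusNotFound"), the CRI PullImage RPC returns NotFound, and kubelet records it at log.go 32, kuberuntime_image.go 42, kuberuntime_manager.go 1358, and finally pod_workers.go 1301 as "Error syncing pod"; further below the same pods are moved into ImagePullBackOff. The failing pull is reproducible outside kubelet against the node's containerd socket; a sketch using the standard containerd Go client (the image reference is the one from the log, and "k8s.io" is the namespace containerd's CRI plugin uses for pod images):

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin keeps pod images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference the kubelet asked for; a tag missing from the
	// registry fails resolution with the NotFound error seen above.
	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.4",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull failed: %v", err)
	}
}
```

Note that the pod sandboxes themselves come up fine (the RunPodSandbox calls return sandbox IDs); only the application containers are blocked on the missing ghcr.io tags.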
Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.601 [INFO][4287] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f4234a6c60' Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.609 [INFO][4287] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.617 [INFO][4287] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.624 [INFO][4287] ipam/ipam.go 511: Trying affinity for 192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.627 [INFO][4287] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.631 [INFO][4287] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.632 [INFO][4287] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.637 [INFO][4287] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.642 [INFO][4287] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.654 [INFO][4287] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.196/26] block=192.168.89.192/26 handle="k8s-pod-network.48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.654 [INFO][4287] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.196/26] handle="k8s-pod-network.48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.654 [INFO][4287] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
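This is now the third allocation satisfied from the same affine block: Calico hands each node /26 chunks of the pod CIDR and serves per-pod requests locally from them, which is why .194, .195, and now .196 all come out of 192.168.89.192/26 without consulting any other node's ranges. The arithmetic is easy to check in plain Go:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The node's affine block, from the "Trying affinity" entries above.
	_, block, err := net.ParseCIDR("192.168.89.192/26")
	if err != nil {
		panic(err)
	}
	for _, ip := range []string{"192.168.89.194", "192.168.89.195", "192.168.89.196"} {
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(net.ParseIP(ip)))
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("block capacity: %d addresses\n", 1<<(bits-ones)) // 64 for a /26
}
```

A /26 holds 64 addresses, so the node can place dozens more workloads in this block before the IPAM walk has to claim a new one.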
Nov 8 00:21:36.698897 containerd[1463]: 2025-11-08 00:21:36.654 [INFO][4287] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.196/26] IPv6=[] ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" HandleID="k8s-pod-network.48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.701304 containerd[1463]: 2025-11-08 00:21:36.657 [INFO][4276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Namespace="calico-system" Pod="csi-node-driver-ldqsl" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71de4983-7c24-4272-8fa7-0a4b5407d2c0", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"", Pod:"csi-node-driver-ldqsl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5946011217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:36.701304 containerd[1463]: 2025-11-08 00:21:36.657 [INFO][4276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.196/32] ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Namespace="calico-system" Pod="csi-node-driver-ldqsl" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.701304 containerd[1463]: 2025-11-08 00:21:36.657 [INFO][4276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5946011217 ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Namespace="calico-system" Pod="csi-node-driver-ldqsl" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.701304 containerd[1463]: 2025-11-08 00:21:36.668 [INFO][4276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Namespace="calico-system" Pod="csi-node-driver-ldqsl" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.701304 containerd[1463]: 2025-11-08 00:21:36.669 [INFO][4276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Namespace="calico-system" Pod="csi-node-driver-ldqsl" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71de4983-7c24-4272-8fa7-0a4b5407d2c0", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b", Pod:"csi-node-driver-ldqsl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5946011217", MAC:"4a:08:5b:07:e7:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:36.701304 containerd[1463]: 2025-11-08 00:21:36.694 [INFO][4276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b" Namespace="calico-system" Pod="csi-node-driver-ldqsl" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:36.737934 containerd[1463]: time="2025-11-08T00:21:36.736182947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:36.737934 containerd[1463]: time="2025-11-08T00:21:36.736611262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:36.737934 containerd[1463]: time="2025-11-08T00:21:36.736679014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:36.738357 containerd[1463]: time="2025-11-08T00:21:36.738240234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:36.756771 kubelet[2501]: E1108 00:21:36.756705 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" podUID="747a1d48-6b9b-4ad4-aae6-7e918f295a7f" Nov 8 00:21:36.769648 kubelet[2501]: E1108 00:21:36.769451 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" podUID="9b72e8b4-2554-45c8-82a8-87c096020fee" Nov 8 00:21:36.792832 systemd[1]: run-containerd-runc-k8s.io-48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b-runc.r2a4Hk.mount: Deactivated successfully. Nov 8 00:21:36.808203 systemd[1]: Started cri-containerd-48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b.scope - libcontainer container 48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b. Nov 8 00:21:36.903928 containerd[1463]: time="2025-11-08T00:21:36.903726886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ldqsl,Uid:71de4983-7c24-4272-8fa7-0a4b5407d2c0,Namespace:calico-system,Attempt:1,} returns sandbox id \"48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b\"" Nov 8 00:21:36.907700 containerd[1463]: time="2025-11-08T00:21:36.907629464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:21:37.253320 containerd[1463]: time="2025-11-08T00:21:37.253169421Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:37.254048 containerd[1463]: time="2025-11-08T00:21:37.254008026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:21:37.254171 containerd[1463]: time="2025-11-08T00:21:37.254103886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:21:37.254822 kubelet[2501]: E1108 00:21:37.254414 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:37.254822 kubelet[2501]: E1108 00:21:37.254489 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:37.254822 kubelet[2501]: E1108 00:21:37.254702 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpwkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ldqsl_calico-system(71de4983-7c24-4272-8fa7-0a4b5407d2c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:37.258530 containerd[1463]: time="2025-11-08T00:21:37.258491348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:21:37.374469 containerd[1463]: time="2025-11-08T00:21:37.374115296Z" level=info msg="StopPodSandbox for \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\"" Nov 8 00:21:37.375752 systemd-networkd[1367]: cali5d01887687d: Gained IPv6LL Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.450 [INFO][4355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.450 [INFO][4355] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" iface="eth0" netns="/var/run/netns/cni-fef18bc3-f4c8-1847-8fcb-5ca425545807" Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.453 [INFO][4355] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" iface="eth0" netns="/var/run/netns/cni-fef18bc3-f4c8-1847-8fcb-5ca425545807" Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.454 [INFO][4355] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" iface="eth0" netns="/var/run/netns/cni-fef18bc3-f4c8-1847-8fcb-5ca425545807" Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.455 [INFO][4355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.455 [INFO][4355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.490 [INFO][4362] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" HandleID="k8s-pod-network.5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.490 [INFO][4362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.490 [INFO][4362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.501 [WARNING][4362] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" HandleID="k8s-pod-network.5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.501 [INFO][4362] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" HandleID="k8s-pod-network.5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.504 [INFO][4362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:37.509522 containerd[1463]: 2025-11-08 00:21:37.506 [INFO][4355] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:37.512926 containerd[1463]: time="2025-11-08T00:21:37.512462949Z" level=info msg="TearDown network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\" successfully" Nov 8 00:21:37.512926 containerd[1463]: time="2025-11-08T00:21:37.512527628Z" level=info msg="StopPodSandbox for \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\" returns successfully" Nov 8 00:21:37.514241 kubelet[2501]: E1108 00:21:37.514189 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:37.515432 systemd[1]: run-netns-cni\x2dfef18bc3\x2df4c8\x2d1847\x2d8fcb\x2d5ca425545807.mount: Deactivated successfully. 
Nov 8 00:21:37.517135 containerd[1463]: time="2025-11-08T00:21:37.516613177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h5wgp,Uid:25d1e64e-eeda-401a-ad8a-78903d2ff60f,Namespace:kube-system,Attempt:1,}" Nov 8 00:21:37.606951 containerd[1463]: time="2025-11-08T00:21:37.606289557Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:37.607291 containerd[1463]: time="2025-11-08T00:21:37.607227790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:21:37.607402 containerd[1463]: time="2025-11-08T00:21:37.607358498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:21:37.607655 kubelet[2501]: E1108 00:21:37.607618 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:21:37.607785 kubelet[2501]: E1108 00:21:37.607767 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:21:37.608089 kubelet[2501]: E1108 00:21:37.608044 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpwkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ldqsl_calico-system(71de4983-7c24-4272-8fa7-0a4b5407d2c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:37.609526 kubelet[2501]: E1108 00:21:37.609463 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:37.687578 systemd-networkd[1367]: calic814f348c90: Link UP Nov 8 00:21:37.687789 systemd-networkd[1367]: calic814f348c90: Gained carrier Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.585 [INFO][4369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0 coredns-674b8bbfcf- kube-system 25d1e64e-eeda-401a-ad8a-78903d2ff60f 1007 0 2025-11-08 00:20:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf 
projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-f4234a6c60 coredns-674b8bbfcf-h5wgp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic814f348c90 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5wgp" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.585 [INFO][4369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5wgp" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.627 [INFO][4380] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" HandleID="k8s-pod-network.66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.627 [INFO][4380] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" HandleID="k8s-pod-network.66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf130), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-f4234a6c60", "pod":"coredns-674b8bbfcf-h5wgp", "timestamp":"2025-11-08 00:21:37.627317127 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f4234a6c60", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.627 [INFO][4380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.627 [INFO][4380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
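Unlike the apiserver and CSI endpoints, this coredns endpoint carries named ports: the summary above lists {dns UDP 53}, {dns-tcp TCP 53}, and {metrics TCP 9153}, and the fuller WorkloadEndpointPort struct dumps below print the same values in hex (Port:0x35 and Port:0x23c1). A quick decode:

```go
package main

import "fmt"

func main() {
	// Port values as printed in hex in the WorkloadEndpointPort dumps below.
	ports := map[string]uint16{
		"dns":     0x35,   // 53/UDP
		"dns-tcp": 0x35,   // 53/TCP
		"metrics": 0x23c1, // 9153/TCP, coredns' Prometheus endpoint
	}
	for name, p := range ports {
		fmt.Printf("%-8s -> %d\n", name, p)
	}
}
```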
Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.627 [INFO][4380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f4234a6c60' Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.638 [INFO][4380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.646 [INFO][4380] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.653 [INFO][4380] ipam/ipam.go 511: Trying affinity for 192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.656 [INFO][4380] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.660 [INFO][4380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.660 [INFO][4380] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.663 [INFO][4380] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6 Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.671 [INFO][4380] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.680 [INFO][4380] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.197/26] block=192.168.89.192/26 handle="k8s-pod-network.66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.680 [INFO][4380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.197/26] handle="k8s-pod-network.66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.680 [INFO][4380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:37.712704 containerd[1463]: 2025-11-08 00:21:37.680 [INFO][4380] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.197/26] IPv6=[] ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" HandleID="k8s-pod-network.66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.717653 containerd[1463]: 2025-11-08 00:21:37.683 [INFO][4369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5wgp" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"25d1e64e-eeda-401a-ad8a-78903d2ff60f", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"", Pod:"coredns-674b8bbfcf-h5wgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic814f348c90", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:37.717653 containerd[1463]: 2025-11-08 00:21:37.683 [INFO][4369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.197/32] ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5wgp" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.717653 containerd[1463]: 2025-11-08 00:21:37.683 [INFO][4369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic814f348c90 ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5wgp" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.717653 containerd[1463]: 2025-11-08 00:21:37.686 [INFO][4369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-h5wgp" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.717653 containerd[1463]: 2025-11-08 00:21:37.687 [INFO][4369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5wgp" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"25d1e64e-eeda-401a-ad8a-78903d2ff60f", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6", Pod:"coredns-674b8bbfcf-h5wgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic814f348c90", MAC:"42:ef:25:e2:d9:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:37.717653 containerd[1463]: 2025-11-08 00:21:37.701 [INFO][4369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6" Namespace="kube-system" Pod="coredns-674b8bbfcf-h5wgp" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:37.746984 containerd[1463]: time="2025-11-08T00:21:37.746051586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:37.746984 containerd[1463]: time="2025-11-08T00:21:37.746860125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:37.746984 containerd[1463]: time="2025-11-08T00:21:37.746905004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:37.748512 containerd[1463]: time="2025-11-08T00:21:37.748345951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:37.778758 kubelet[2501]: E1108 00:21:37.778561 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" podUID="747a1d48-6b9b-4ad4-aae6-7e918f295a7f" Nov 8 00:21:37.782813 systemd[1]: Started cri-containerd-66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6.scope - libcontainer container 66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6. Nov 8 00:21:37.794023 kubelet[2501]: E1108 00:21:37.793298 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" podUID="9b72e8b4-2554-45c8-82a8-87c096020fee" Nov 8 00:21:37.840200 kubelet[2501]: E1108 00:21:37.839571 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:37.893198 containerd[1463]: time="2025-11-08T00:21:37.893136325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h5wgp,Uid:25d1e64e-eeda-401a-ad8a-78903d2ff60f,Namespace:kube-system,Attempt:1,} returns sandbox id \"66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6\"" Nov 8 00:21:37.894856 kubelet[2501]: E1108 00:21:37.894761 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:37.903437 containerd[1463]: time="2025-11-08T00:21:37.902787816Z" level=info msg="CreateContainer within sandbox \"66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:21:37.931127 containerd[1463]: time="2025-11-08T00:21:37.931000087Z" level=info 
msg="CreateContainer within sandbox \"66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"730e7826f64ae20a10c02bcdb24fd505d03098696676d805fc8f9f7274d9e5c4\"" Nov 8 00:21:37.932915 containerd[1463]: time="2025-11-08T00:21:37.932787870Z" level=info msg="StartContainer for \"730e7826f64ae20a10c02bcdb24fd505d03098696676d805fc8f9f7274d9e5c4\"" Nov 8 00:21:37.953273 systemd-networkd[1367]: cali06ef27cc135: Gained IPv6LL Nov 8 00:21:37.970262 systemd[1]: Started cri-containerd-730e7826f64ae20a10c02bcdb24fd505d03098696676d805fc8f9f7274d9e5c4.scope - libcontainer container 730e7826f64ae20a10c02bcdb24fd505d03098696676d805fc8f9f7274d9e5c4. Nov 8 00:21:38.015348 containerd[1463]: time="2025-11-08T00:21:38.015295853Z" level=info msg="StartContainer for \"730e7826f64ae20a10c02bcdb24fd505d03098696676d805fc8f9f7274d9e5c4\" returns successfully" Nov 8 00:21:38.373519 containerd[1463]: time="2025-11-08T00:21:38.373455450Z" level=info msg="StopPodSandbox for \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\"" Nov 8 00:21:38.374660 containerd[1463]: time="2025-11-08T00:21:38.373455724Z" level=info msg="StopPodSandbox for \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\"" Nov 8 00:21:38.529049 systemd-networkd[1367]: calie5946011217: Gained IPv6LL Nov 8 00:21:38.536628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2298517417.mount: Deactivated successfully. Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.487 [INFO][4495] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.487 [INFO][4495] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" iface="eth0" netns="/var/run/netns/cni-a2d06887-c220-b33c-46fc-a94af561167c" Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.487 [INFO][4495] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" iface="eth0" netns="/var/run/netns/cni-a2d06887-c220-b33c-46fc-a94af561167c" Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.488 [INFO][4495] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" iface="eth0" netns="/var/run/netns/cni-a2d06887-c220-b33c-46fc-a94af561167c" Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.489 [INFO][4495] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.489 [INFO][4495] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.559 [INFO][4509] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" HandleID="k8s-pod-network.3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.559 [INFO][4509] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.559 [INFO][4509] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.567 [WARNING][4509] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" HandleID="k8s-pod-network.3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.567 [INFO][4509] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" HandleID="k8s-pod-network.3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.571 [INFO][4509] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:38.577094 containerd[1463]: 2025-11-08 00:21:38.574 [INFO][4495] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:38.580070 containerd[1463]: time="2025-11-08T00:21:38.580003711Z" level=info msg="TearDown network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\" successfully" Nov 8 00:21:38.580070 containerd[1463]: time="2025-11-08T00:21:38.580069855Z" level=info msg="StopPodSandbox for \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\" returns successfully" Nov 8 00:21:38.581402 systemd[1]: run-netns-cni\x2da2d06887\x2dc220\x2db33c\x2d46fc\x2da94af561167c.mount: Deactivated successfully. Nov 8 00:21:38.584984 containerd[1463]: time="2025-11-08T00:21:38.584928097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc684977b-dwfpx,Uid:10d0265b-9d98-419e-98b8-ef3078177b60,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.480 [INFO][4496] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.483 [INFO][4496] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" iface="eth0" netns="/var/run/netns/cni-ed87a629-d1de-4446-323a-01be5501411e" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.483 [INFO][4496] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" iface="eth0" netns="/var/run/netns/cni-ed87a629-d1de-4446-323a-01be5501411e" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.486 [INFO][4496] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" iface="eth0" netns="/var/run/netns/cni-ed87a629-d1de-4446-323a-01be5501411e" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.488 [INFO][4496] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.488 [INFO][4496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.566 [INFO][4510] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" HandleID="k8s-pod-network.d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.567 [INFO][4510] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.572 [INFO][4510] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.588 [WARNING][4510] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" HandleID="k8s-pod-network.d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.588 [INFO][4510] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" HandleID="k8s-pod-network.d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.591 [INFO][4510] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:38.607948 containerd[1463]: 2025-11-08 00:21:38.598 [INFO][4496] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:38.609325 containerd[1463]: time="2025-11-08T00:21:38.608770168Z" level=info msg="TearDown network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\" successfully" Nov 8 00:21:38.609325 containerd[1463]: time="2025-11-08T00:21:38.608803345Z" level=info msg="StopPodSandbox for \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\" returns successfully" Nov 8 00:21:38.613079 systemd[1]: run-netns-cni\x2ded87a629\x2dd1de\x2d4446\x2d323a\x2d01be5501411e.mount: Deactivated successfully. 
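Both teardown sequences above (sandboxes 3aa5734e… and d2fa9891…) illustrate that CNI DEL is deliberately idempotent: the workload's veth is already gone ("Nothing to do"), and the IPAM handle no longer exists, so the release is logged as a WARNING and ignored rather than treated as a failure. A minimal Go sketch of that error-tolerant release pattern follows; deleteVeth and releaseHandle are hypothetical stand-ins for the plugin's real dataplane and IPAM calls.

package main

import (
	"errors"
	"fmt"
	"log"
)

var errNotFound = errors.New("not found")

// Stand-ins for the real dataplane and IPAM calls; both report
// errNotFound when the resource has already been cleaned up.
func deleteVeth(iface string) error       { return errNotFound }
func releaseHandle(handleID string) error { return errNotFound }

// teardown mirrors the CNI DEL flow above: every "already gone"
// condition is treated as success, so a repeated DEL cannot fail.
func teardown(iface, handleID string) error {
	if err := deleteVeth(iface); errors.Is(err, errNotFound) {
		log.Println("Workload's veth was already gone. Nothing to do.")
	} else if err != nil {
		return fmt.Errorf("deleting veth: %w", err)
	}
	if err := releaseHandle(handleID); errors.Is(err, errNotFound) {
		log.Println("[WARNING] Asked to release address but it doesn't exist. Ignoring")
	} else if err != nil {
		return fmt.Errorf("releasing IP: %w", err)
	}
	return nil
}

func main() {
	// A second DEL for a sandbox whose resources are already gone
	// still completes, as in the log's "Teardown processing complete."
	if err := teardown("eth0", "k8s-pod-network.3aa5734e…"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("Teardown processing complete.")
}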
Nov 8 00:21:38.615680 containerd[1463]: time="2025-11-08T00:21:38.614804755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fm88z,Uid:329e0556-11fe-424d-9621-4c503891f4c4,Namespace:calico-system,Attempt:1,}" Nov 8 00:21:38.787709 kubelet[2501]: E1108 00:21:38.787454 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:38.794014 kubelet[2501]: E1108 00:21:38.793301 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:38.877302 systemd-networkd[1367]: califc797b3cc36: Link UP Nov 8 00:21:38.878766 systemd-networkd[1367]: califc797b3cc36: Gained carrier Nov 8 00:21:38.892977 kubelet[2501]: I1108 00:21:38.890481 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h5wgp" podStartSLOduration=42.890261511 podStartE2EDuration="42.890261511s" podCreationTimestamp="2025-11-08 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:38.855968743 +0000 UTC m=+49.666531849" watchObservedRunningTime="2025-11-08 00:21:38.890261511 +0000 UTC m=+49.700824618" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.694 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0 calico-kube-controllers-bc684977b- calico-system 10d0265b-9d98-419e-98b8-ef3078177b60 1035 0 2025-11-08 00:21:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bc684977b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-f4234a6c60 calico-kube-controllers-bc684977b-dwfpx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califc797b3cc36 [] [] }} ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Namespace="calico-system" Pod="calico-kube-controllers-bc684977b-dwfpx" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.695 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Namespace="calico-system" Pod="calico-kube-controllers-bc684977b-dwfpx" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.761 [INFO][4544] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" HandleID="k8s-pod-network.3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.761 [INFO][4544] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" HandleID="k8s-pod-network.3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f4234a6c60", "pod":"calico-kube-controllers-bc684977b-dwfpx", "timestamp":"2025-11-08 00:21:38.761062635 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f4234a6c60", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.761 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.761 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.761 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f4234a6c60' Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.773 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.787 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.809 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.814 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.825 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.826 [INFO][4544] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.834 [INFO][4544] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208 Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.844 [INFO][4544] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.859 [INFO][4544] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.198/26] block=192.168.89.192/26 handle="k8s-pod-network.3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.859 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.198/26] handle="k8s-pod-network.3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.860 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:38.922500 containerd[1463]: 2025-11-08 00:21:38.860 [INFO][4544] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.198/26] IPv6=[] ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" HandleID="k8s-pod-network.3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:38.926651 containerd[1463]: 2025-11-08 00:21:38.868 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Namespace="calico-system" Pod="calico-kube-controllers-bc684977b-dwfpx" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0", GenerateName:"calico-kube-controllers-bc684977b-", Namespace:"calico-system", SelfLink:"", UID:"10d0265b-9d98-419e-98b8-ef3078177b60", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc684977b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"", Pod:"calico-kube-controllers-bc684977b-dwfpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc797b3cc36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:38.926651 containerd[1463]: 2025-11-08 00:21:38.868 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.198/32] ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Namespace="calico-system" Pod="calico-kube-controllers-bc684977b-dwfpx" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:38.926651 containerd[1463]: 2025-11-08 00:21:38.868 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc797b3cc36 ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Namespace="calico-system" Pod="calico-kube-controllers-bc684977b-dwfpx" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:38.926651 containerd[1463]: 2025-11-08 00:21:38.883 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Namespace="calico-system" Pod="calico-kube-controllers-bc684977b-dwfpx" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 
00:21:38.926651 containerd[1463]: 2025-11-08 00:21:38.887 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Namespace="calico-system" Pod="calico-kube-controllers-bc684977b-dwfpx" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0", GenerateName:"calico-kube-controllers-bc684977b-", Namespace:"calico-system", SelfLink:"", UID:"10d0265b-9d98-419e-98b8-ef3078177b60", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc684977b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208", Pod:"calico-kube-controllers-bc684977b-dwfpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc797b3cc36", MAC:"f2:ea:8d:0d:52:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:38.926651 containerd[1463]: 2025-11-08 00:21:38.916 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208" Namespace="calico-system" Pod="calico-kube-controllers-bc684977b-dwfpx" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:38.966919 containerd[1463]: time="2025-11-08T00:21:38.966746191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:38.966919 containerd[1463]: time="2025-11-08T00:21:38.966828476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:38.966919 containerd[1463]: time="2025-11-08T00:21:38.966843136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:38.967866 containerd[1463]: time="2025-11-08T00:21:38.967787780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:39.012281 systemd-networkd[1367]: calif044e612c2c: Link UP Nov 8 00:21:39.015424 systemd-networkd[1367]: calif044e612c2c: Gained carrier Nov 8 00:21:39.021163 systemd[1]: Started cri-containerd-3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208.scope - libcontainer container 3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208. Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.746 [INFO][4530] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0 goldmane-666569f655- calico-system 329e0556-11fe-424d-9621-4c503891f4c4 1034 0 2025-11-08 00:21:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-f4234a6c60 goldmane-666569f655-fm88z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif044e612c2c [] [] }} ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Namespace="calico-system" Pod="goldmane-666569f655-fm88z" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.746 [INFO][4530] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Namespace="calico-system" Pod="goldmane-666569f655-fm88z" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.811 [INFO][4553] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" HandleID="k8s-pod-network.ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.812 [INFO][4553] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" HandleID="k8s-pod-network.ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f4234a6c60", "pod":"goldmane-666569f655-fm88z", "timestamp":"2025-11-08 00:21:38.811941206 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f4234a6c60", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.812 [INFO][4553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.860 [INFO][4553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.860 [INFO][4553] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f4234a6c60' Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.893 [INFO][4553] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.920 [INFO][4553] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.939 [INFO][4553] ipam/ipam.go 511: Trying affinity for 192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.945 [INFO][4553] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.951 [INFO][4553] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.951 [INFO][4553] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.957 [INFO][4553] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3 Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.968 [INFO][4553] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.992 [INFO][4553] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.199/26] block=192.168.89.192/26 handle="k8s-pod-network.ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.992 [INFO][4553] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.199/26] handle="k8s-pod-network.ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.992 [INFO][4553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:21:39.055099 containerd[1463]: 2025-11-08 00:21:38.992 [INFO][4553] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.199/26] IPv6=[] ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" HandleID="k8s-pod-network.ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:39.055710 containerd[1463]: 2025-11-08 00:21:39.003 [INFO][4530] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Namespace="calico-system" Pod="goldmane-666569f655-fm88z" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"329e0556-11fe-424d-9621-4c503891f4c4", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"", Pod:"goldmane-666569f655-fm88z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif044e612c2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:39.055710 containerd[1463]: 2025-11-08 00:21:39.003 [INFO][4530] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.199/32] ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Namespace="calico-system" Pod="goldmane-666569f655-fm88z" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:39.055710 containerd[1463]: 2025-11-08 00:21:39.003 [INFO][4530] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif044e612c2c ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Namespace="calico-system" Pod="goldmane-666569f655-fm88z" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:39.055710 containerd[1463]: 2025-11-08 00:21:39.019 [INFO][4530] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Namespace="calico-system" Pod="goldmane-666569f655-fm88z" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:39.055710 containerd[1463]: 2025-11-08 00:21:39.021 [INFO][4530] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" 
Namespace="calico-system" Pod="goldmane-666569f655-fm88z" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"329e0556-11fe-424d-9621-4c503891f4c4", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3", Pod:"goldmane-666569f655-fm88z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif044e612c2c", MAC:"1e:61:a3:4c:f0:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:39.055710 containerd[1463]: 2025-11-08 00:21:39.049 [INFO][4530] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3" Namespace="calico-system" Pod="goldmane-666569f655-fm88z" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:39.113054 containerd[1463]: time="2025-11-08T00:21:39.112321400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:39.113054 containerd[1463]: time="2025-11-08T00:21:39.112430877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:39.113054 containerd[1463]: time="2025-11-08T00:21:39.112452355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:39.113054 containerd[1463]: time="2025-11-08T00:21:39.112604977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:39.153510 systemd[1]: Started cri-containerd-ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3.scope - libcontainer container ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3. 
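The kubelet errors recurring through this section (calico/apiserver, calico/csi and node-driver-registrar above; kube-controllers and goldmane just below) all follow one pattern: ghcr.io answers 404 for the requested tag, containerd surfaces NotFound, kubelet records ErrImagePull, and subsequent pod syncs report ImagePullBackOff with an exponentially growing retry delay. The sketch below models that backoff; the 10s initial delay and 5m cap are commonly cited kubelet defaults, assumed here rather than read from this log.

package main

import (
	"fmt"
	"time"
)

// backoffDelays models kubelet's image-pull backoff: each failed
// pull doubles the wait before the next attempt, up to a fixed cap.
func backoffDelays(failures int, initial, maxDelay time.Duration) []time.Duration {
	delays := make([]time.Duration, 0, failures)
	d := initial
	for i := 0; i < failures; i++ {
		delays = append(delays, d)
		if d *= 2; d > maxDelay {
			d = maxDelay
		}
	}
	return delays
}

func main() {
	// A tag the registry never serves (NotFound) fails every attempt,
	// so the pod oscillates between ErrImagePull and ImagePullBackOff.
	for i, d := range backoffDelays(7, 10*time.Second, 5*time.Minute) {
		fmt.Printf("attempt %d: back-off %v before next pull\n", i+1, d)
	}
}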
Nov 8 00:21:39.211542 containerd[1463]: time="2025-11-08T00:21:39.211409151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bc684977b-dwfpx,Uid:10d0265b-9d98-419e-98b8-ef3078177b60,Namespace:calico-system,Attempt:1,} returns sandbox id \"3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208\"" Nov 8 00:21:39.222271 containerd[1463]: time="2025-11-08T00:21:39.222131261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:21:39.304935 containerd[1463]: time="2025-11-08T00:21:39.304505772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-fm88z,Uid:329e0556-11fe-424d-9621-4c503891f4c4,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3\"" Nov 8 00:21:39.359220 systemd-networkd[1367]: calic814f348c90: Gained IPv6LL Nov 8 00:21:39.375301 containerd[1463]: time="2025-11-08T00:21:39.375242605Z" level=info msg="StopPodSandbox for \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\"" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.473 [INFO][4672] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.473 [INFO][4672] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" iface="eth0" netns="/var/run/netns/cni-7ac61b45-6225-f358-40ce-1ecfbc77c08b" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.474 [INFO][4672] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" iface="eth0" netns="/var/run/netns/cni-7ac61b45-6225-f358-40ce-1ecfbc77c08b" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.474 [INFO][4672] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" iface="eth0" netns="/var/run/netns/cni-7ac61b45-6225-f358-40ce-1ecfbc77c08b" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.474 [INFO][4672] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.474 [INFO][4672] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.509 [INFO][4679] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" HandleID="k8s-pod-network.37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.509 [INFO][4679] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.510 [INFO][4679] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.520 [WARNING][4679] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" HandleID="k8s-pod-network.37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.520 [INFO][4679] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" HandleID="k8s-pod-network.37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.522 [INFO][4679] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:39.528646 containerd[1463]: 2025-11-08 00:21:39.525 [INFO][4672] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:39.529580 containerd[1463]: time="2025-11-08T00:21:39.528793434Z" level=info msg="TearDown network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\" successfully" Nov 8 00:21:39.529580 containerd[1463]: time="2025-11-08T00:21:39.528829138Z" level=info msg="StopPodSandbox for \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\" returns successfully" Nov 8 00:21:39.533549 kubelet[2501]: E1108 00:21:39.530411 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:39.533701 containerd[1463]: time="2025-11-08T00:21:39.533647690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c86nr,Uid:bf32dec8-2497-4f2e-91ee-003d9b7723b4,Namespace:kube-system,Attempt:1,}" Nov 8 00:21:39.542715 systemd[1]: run-netns-cni\x2d7ac61b45\x2d6225\x2df358\x2d40ce\x2d1ecfbc77c08b.mount: Deactivated successfully. 
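The repeated kubelet dns.go warning ("Nameserver limits exceeded") reflects the glibc limit of three nameservers per resolv.conf: when the merged resolver list is longer, kubelet truncates it and logs the line it actually applied. Notably, the applied line here lists 67.207.67.3 twice, so a duplicate occupies a slot that a distinct resolver could have used. A small sketch of the cap follows; the fourth server (67.207.67.1) is hypothetical, added only to show what de-duplicating before truncation would preserve.

package main

import "fmt"

const maxNameservers = 3 // glibc resolvers read at most three entries

// applyLimit mimics the truncation kubelet warns about: keep the
// first maxNameservers entries verbatim, duplicates included.
func applyLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
	}
	return servers
}

// dedupe keeps only the first occurrence of each server.
func dedupe(servers []string) []string {
	seen := map[string]bool{}
	out := []string{}
	for _, s := range servers {
		if !seen[s] {
			seen[s] = true
			out = append(out, s)
		}
	}
	return out
}

func main() {
	// First three entries are from the warning above; 67.207.67.1 is
	// a hypothetical fourth server crowded out by the duplicate.
	servers := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "67.207.67.1"}
	fmt.Println(applyLimit(servers))         // [67.207.67.3 67.207.67.2 67.207.67.3]
	fmt.Println(applyLimit(dedupe(servers))) // [67.207.67.3 67.207.67.2 67.207.67.1]
}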
Nov 8 00:21:39.572614 containerd[1463]: time="2025-11-08T00:21:39.571211535Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:39.572614 containerd[1463]: time="2025-11-08T00:21:39.572026017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:21:39.572614 containerd[1463]: time="2025-11-08T00:21:39.572130049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:39.572818 kubelet[2501]: E1108 00:21:39.572341 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:39.572818 kubelet[2501]: E1108 00:21:39.572390 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:39.572818 kubelet[2501]: E1108 00:21:39.572607 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwdzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-bc684977b-dwfpx_calico-system(10d0265b-9d98-419e-98b8-ef3078177b60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:39.573462 containerd[1463]: time="2025-11-08T00:21:39.573206388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:21:39.575568 kubelet[2501]: E1108 00:21:39.574677 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" podUID="10d0265b-9d98-419e-98b8-ef3078177b60" Nov 8 00:21:39.752236 systemd-networkd[1367]: calib87103d6f71: Link UP Nov 8 00:21:39.755635 systemd-networkd[1367]: calib87103d6f71: Gained carrier Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.628 [INFO][4686] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0 coredns-674b8bbfcf- kube-system bf32dec8-2497-4f2e-91ee-003d9b7723b4 1060 0 2025-11-08 00:20:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-f4234a6c60 coredns-674b8bbfcf-c86nr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib87103d6f71 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-c86nr" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.629 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-c86nr" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.673 [INFO][4697] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" HandleID="k8s-pod-network.9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.674 [INFO][4697] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" HandleID="k8s-pod-network.9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-f4234a6c60", "pod":"coredns-674b8bbfcf-c86nr", "timestamp":"2025-11-08 00:21:39.673798304 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f4234a6c60", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.674 [INFO][4697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.674 [INFO][4697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.674 [INFO][4697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f4234a6c60' Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.682 [INFO][4697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.692 [INFO][4697] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.699 [INFO][4697] ipam/ipam.go 511: Trying affinity for 192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.703 [INFO][4697] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.708 [INFO][4697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.192/26 host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.708 [INFO][4697] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.192/26 handle="k8s-pod-network.9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.713 [INFO][4697] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.722 [INFO][4697] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.192/26 handle="k8s-pod-network.9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.739 [INFO][4697] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.200/26] block=192.168.89.192/26 handle="k8s-pod-network.9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" host="ci-4081.3.6-n-f4234a6c60" 
Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.740 [INFO][4697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.200/26] handle="k8s-pod-network.9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" host="ci-4081.3.6-n-f4234a6c60" Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.740 [INFO][4697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:39.792403 containerd[1463]: 2025-11-08 00:21:39.740 [INFO][4697] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.200/26] IPv6=[] ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" HandleID="k8s-pod-network.9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.793845 containerd[1463]: 2025-11-08 00:21:39.744 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-c86nr" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf32dec8-2497-4f2e-91ee-003d9b7723b4", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"", Pod:"coredns-674b8bbfcf-c86nr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib87103d6f71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:39.793845 containerd[1463]: 2025-11-08 00:21:39.744 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.200/32] ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-c86nr" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.793845 containerd[1463]: 2025-11-08 00:21:39.744 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib87103d6f71 ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-c86nr" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.793845 containerd[1463]: 2025-11-08 00:21:39.756 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-c86nr" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.793845 containerd[1463]: 2025-11-08 00:21:39.758 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-c86nr" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf32dec8-2497-4f2e-91ee-003d9b7723b4", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b", Pod:"coredns-674b8bbfcf-c86nr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib87103d6f71", MAC:"66:c3:d8:01:af:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:39.793845 containerd[1463]: 2025-11-08 00:21:39.782 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b" Namespace="kube-system" Pod="coredns-674b8bbfcf-c86nr" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:39.811917 kubelet[2501]: E1108 00:21:39.810269 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:39.819378 kubelet[2501]: E1108 00:21:39.819098 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" podUID="10d0265b-9d98-419e-98b8-ef3078177b60" Nov 8 00:21:39.870734 containerd[1463]: time="2025-11-08T00:21:39.870254759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:21:39.870734 containerd[1463]: time="2025-11-08T00:21:39.870348929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:21:39.870734 containerd[1463]: time="2025-11-08T00:21:39.870398163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:39.870734 containerd[1463]: time="2025-11-08T00:21:39.870509367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:21:39.921889 systemd[1]: run-containerd-runc-k8s.io-9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b-runc.IMyoTz.mount: Deactivated successfully. Nov 8 00:21:39.935427 containerd[1463]: time="2025-11-08T00:21:39.934773657Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:39.937796 containerd[1463]: time="2025-11-08T00:21:39.936938355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:21:39.937796 containerd[1463]: time="2025-11-08T00:21:39.937055240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:39.938041 kubelet[2501]: E1108 00:21:39.937209 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:39.938041 kubelet[2501]: E1108 00:21:39.937264 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:39.938041 kubelet[2501]: E1108 00:21:39.937466 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzvx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fm88z_calico-system(329e0556-11fe-424d-9621-4c503891f4c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:39.938462 systemd[1]: Started cri-containerd-9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b.scope - libcontainer container 9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b. 
Nov 8 00:21:39.941344 kubelet[2501]: E1108 00:21:39.941018 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fm88z" podUID="329e0556-11fe-424d-9621-4c503891f4c4" Nov 8 00:21:40.018763 containerd[1463]: time="2025-11-08T00:21:40.018494723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c86nr,Uid:bf32dec8-2497-4f2e-91ee-003d9b7723b4,Namespace:kube-system,Attempt:1,} returns sandbox id \"9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b\"" Nov 8 00:21:40.021793 kubelet[2501]: E1108 00:21:40.021737 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:40.029881 containerd[1463]: time="2025-11-08T00:21:40.029285565Z" level=info msg="CreateContainer within sandbox \"9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:21:40.041424 containerd[1463]: time="2025-11-08T00:21:40.041362755Z" level=info msg="CreateContainer within sandbox \"9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95f887fd3fd0852846c04b35837df49561dd587616979eca1ac9781c4ef3e0b0\"" Nov 8 00:21:40.042293 containerd[1463]: time="2025-11-08T00:21:40.042241360Z" level=info msg="StartContainer for \"95f887fd3fd0852846c04b35837df49561dd587616979eca1ac9781c4ef3e0b0\"" Nov 8 00:21:40.063102 systemd-networkd[1367]: calif044e612c2c: Gained IPv6LL Nov 8 00:21:40.094221 systemd[1]: Started cri-containerd-95f887fd3fd0852846c04b35837df49561dd587616979eca1ac9781c4ef3e0b0.scope - libcontainer container 95f887fd3fd0852846c04b35837df49561dd587616979eca1ac9781c4ef3e0b0. 
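Alongside the failures, the coredns pod advances through the normal CRI flow: RunPodSandbox returns sandbox 9379ba76..., CreateContainer registers container 95f887fd... inside it, StartContainer is issued, and systemd shows the matching cri-containerd scope (the success confirmation follows just below). The same create/start sequence can be driven against containerd directly; a sketch assuming the containerd 1.x Go client packages, with the image ref and IDs purely illustrative:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// The CRI plugin performs this sequence internally; here we talk to
	// containerd in the "k8s.io" namespace the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// A nonexistent tag (like the ghcr.io refs in the log) fails right
	// here with "not found" before any container is created.
	image, err := client.Pull(ctx, "docker.io/library/alpine:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	container, err := client.NewContainer(ctx, "demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil { // the StartContainer step
		log.Fatal(err)
	}
}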
Nov 8 00:21:40.132396 containerd[1463]: time="2025-11-08T00:21:40.132353875Z" level=info msg="StartContainer for \"95f887fd3fd0852846c04b35837df49561dd587616979eca1ac9781c4ef3e0b0\" returns successfully" Nov 8 00:21:40.319188 systemd-networkd[1367]: califc797b3cc36: Gained IPv6LL Nov 8 00:21:40.824900 kubelet[2501]: E1108 00:21:40.821367 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:40.824900 kubelet[2501]: E1108 00:21:40.822559 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:40.831337 kubelet[2501]: E1108 00:21:40.831282 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fm88z" podUID="329e0556-11fe-424d-9621-4c503891f4c4" Nov 8 00:21:40.831762 kubelet[2501]: E1108 00:21:40.831739 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" podUID="10d0265b-9d98-419e-98b8-ef3078177b60" Nov 8 00:21:40.886634 kubelet[2501]: I1108 00:21:40.886478 2501 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c86nr" podStartSLOduration=44.88645716 podStartE2EDuration="44.88645716s" podCreationTimestamp="2025-11-08 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:21:40.865686926 +0000 UTC m=+51.676250031" watchObservedRunningTime="2025-11-08 00:21:40.88645716 +0000 UTC m=+51.697020263" Nov 8 00:21:41.471141 systemd-networkd[1367]: calib87103d6f71: Gained IPv6LL Nov 8 00:21:41.824643 kubelet[2501]: E1108 00:21:41.824370 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:42.827346 kubelet[2501]: E1108 00:21:42.826797 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:21:47.080262 systemd[1]: Started sshd@7-64.23.225.39:22-139.178.68.195:49794.service - OpenSSH per-connection server daemon (139.178.68.195:49794). 
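The recurring "Nameserver limits exceeded" warnings above are the kubelet's resolv.conf check: the droplet's resolver list exceeds the classic three-nameserver glibc limit, so the kubelet keeps only the first three entries, duplicates included, which is why the applied line reads "67.207.67.3 67.207.67.2 67.207.67.3". A sketch of that truncation, with the path and limit as the conventional values, assumed for illustration:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// effectiveNameservers mirrors the behaviour behind the warning: keep the
// first `limit` nameserver entries from resolv.conf, duplicates and all,
// and report whether anything was dropped.
func effectiveNameservers(path string, limit int) (kept []string, dropped int, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, 0, err
	}
	defer f.Close()

	var all []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, 0, err
	}
	if len(all) > limit {
		return all[:limit], len(all) - limit, nil
	}
	return all, 0, nil
}

func main() {
	kept, dropped, err := effectiveNameservers("/etc/resolv.conf", 3)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if dropped > 0 {
		fmt.Printf("nameserver limits exceeded, omitting %d entries\n", dropped)
	}
	fmt.Println("applied nameserver line:", strings.Join(kept, " "))
}

The warning itself is harmless; on this node the cleanup would be deduplicating the resolvers in the host's resolv.conf.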
Nov 8 00:21:47.178240 sshd[4811]: Accepted publickey for core from 139.178.68.195 port 49794 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:47.183058 sshd[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:47.191286 systemd-logind[1440]: New session 8 of user core. Nov 8 00:21:47.197153 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:21:47.733282 sshd[4811]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:47.737341 systemd[1]: sshd@7-64.23.225.39:22-139.178.68.195:49794.service: Deactivated successfully. Nov 8 00:21:47.740115 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:21:47.742449 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:21:47.744034 systemd-logind[1440]: Removed session 8. Nov 8 00:21:48.373551 containerd[1463]: time="2025-11-08T00:21:48.373440824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:21:48.728163 containerd[1463]: time="2025-11-08T00:21:48.728000572Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:48.728968 containerd[1463]: time="2025-11-08T00:21:48.728917920Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:21:48.729088 containerd[1463]: time="2025-11-08T00:21:48.728980935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:21:48.729323 kubelet[2501]: E1108 00:21:48.729267 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:48.729694 kubelet[2501]: E1108 00:21:48.729345 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:21:48.729694 kubelet[2501]: E1108 00:21:48.729471 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6531dc8e84ac47b0b0ceb8f72d200569,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89wrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76544c75c6-2xtdd_calico-system(0e8bedc1-e771-4bb5-bd8c-8fc39604616a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:48.731951 containerd[1463]: time="2025-11-08T00:21:48.731905019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:21:49.052955 containerd[1463]: time="2025-11-08T00:21:49.052735368Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:49.053986 containerd[1463]: time="2025-11-08T00:21:49.053924119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:21:49.054559 containerd[1463]: time="2025-11-08T00:21:49.053955879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:49.054620 kubelet[2501]: E1108 00:21:49.054260 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:49.054620 kubelet[2501]: E1108 00:21:49.054327 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:21:49.054620 kubelet[2501]: E1108 00:21:49.054475 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89wrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76544c75c6-2xtdd_calico-system(0e8bedc1-e771-4bb5-bd8c-8fc39604616a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:49.056337 kubelet[2501]: E1108 00:21:49.056277 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76544c75c6-2xtdd" podUID="0e8bedc1-e771-4bb5-bd8c-8fc39604616a" Nov 8 00:21:49.385001 containerd[1463]: time="2025-11-08T00:21:49.384368618Z" level=info msg="StopPodSandbox for \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\"" Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.436 [WARNING][4835] cni-plugin/k8s.go 604: CNI_CONTAINERID 
does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf32dec8-2497-4f2e-91ee-003d9b7723b4", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b", Pod:"coredns-674b8bbfcf-c86nr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib87103d6f71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.437 [INFO][4835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.437 [INFO][4835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" iface="eth0" netns="" Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.437 [INFO][4835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.437 [INFO][4835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.482 [INFO][4842] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" HandleID="k8s-pod-network.37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.482 [INFO][4842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.482 [INFO][4842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.492 [WARNING][4842] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" HandleID="k8s-pod-network.37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.492 [INFO][4842] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" HandleID="k8s-pod-network.37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.494 [INFO][4842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:49.498940 containerd[1463]: 2025-11-08 00:21:49.496 [INFO][4835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:49.500308 containerd[1463]: time="2025-11-08T00:21:49.500074161Z" level=info msg="TearDown network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\" successfully" Nov 8 00:21:49.500308 containerd[1463]: time="2025-11-08T00:21:49.500159850Z" level=info msg="StopPodSandbox for \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\" returns successfully" Nov 8 00:21:49.501660 containerd[1463]: time="2025-11-08T00:21:49.501150424Z" level=info msg="RemovePodSandbox for \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\"" Nov 8 00:21:49.501660 containerd[1463]: time="2025-11-08T00:21:49.501199380Z" level=info msg="Forcibly stopping sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\"" Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.555 [WARNING][4856] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf32dec8-2497-4f2e-91ee-003d9b7723b4", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"9379ba766a52ef435ac5e93ae3d649f6d8e3f86f529a766c14415938e6b2bc9b", Pod:"coredns-674b8bbfcf-c86nr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib87103d6f71", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.555 [INFO][4856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.555 [INFO][4856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" iface="eth0" netns="" Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.555 [INFO][4856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.555 [INFO][4856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.586 [INFO][4863] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" HandleID="k8s-pod-network.37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.586 [INFO][4863] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.586 [INFO][4863] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.596 [WARNING][4863] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" HandleID="k8s-pod-network.37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.596 [INFO][4863] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" HandleID="k8s-pod-network.37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--c86nr-eth0" Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.598 [INFO][4863] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:49.602863 containerd[1463]: 2025-11-08 00:21:49.600 [INFO][4856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891" Nov 8 00:21:49.604067 containerd[1463]: time="2025-11-08T00:21:49.603695973Z" level=info msg="TearDown network for sandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\" successfully" Nov 8 00:21:49.608411 containerd[1463]: time="2025-11-08T00:21:49.608172101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:49.608411 containerd[1463]: time="2025-11-08T00:21:49.608249152Z" level=info msg="RemovePodSandbox \"37fbfe3d10692ced440a2a946babaf48cf9f5941063a82007292ae6fb4524891\" returns successfully" Nov 8 00:21:49.609057 containerd[1463]: time="2025-11-08T00:21:49.609022674Z" level=info msg="StopPodSandbox for \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\"" Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.656 [WARNING][4877] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71de4983-7c24-4272-8fa7-0a4b5407d2c0", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b", Pod:"csi-node-driver-ldqsl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5946011217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.657 [INFO][4877] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.657 [INFO][4877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" iface="eth0" netns="" Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.657 [INFO][4877] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.657 [INFO][4877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.688 [INFO][4884] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" HandleID="k8s-pod-network.2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.688 [INFO][4884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.688 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.696 [WARNING][4884] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" HandleID="k8s-pod-network.2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.696 [INFO][4884] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" HandleID="k8s-pod-network.2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.699 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:49.704438 containerd[1463]: 2025-11-08 00:21:49.701 [INFO][4877] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:49.704438 containerd[1463]: time="2025-11-08T00:21:49.703926069Z" level=info msg="TearDown network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\" successfully" Nov 8 00:21:49.704438 containerd[1463]: time="2025-11-08T00:21:49.703963923Z" level=info msg="StopPodSandbox for \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\" returns successfully" Nov 8 00:21:49.707264 containerd[1463]: time="2025-11-08T00:21:49.707220213Z" level=info msg="RemovePodSandbox for \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\"" Nov 8 00:21:49.707410 containerd[1463]: time="2025-11-08T00:21:49.707270930Z" level=info msg="Forcibly stopping sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\"" Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.756 [WARNING][4898] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71de4983-7c24-4272-8fa7-0a4b5407d2c0", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"48e04c66906424f7f49f0bc2f32c7bea12bea17e6c6e1fba23dbda30f556799b", Pod:"csi-node-driver-ldqsl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5946011217", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.756 [INFO][4898] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.756 [INFO][4898] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" iface="eth0" netns="" Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.756 [INFO][4898] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.756 [INFO][4898] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.787 [INFO][4905] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" HandleID="k8s-pod-network.2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.788 [INFO][4905] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.788 [INFO][4905] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.795 [WARNING][4905] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" HandleID="k8s-pod-network.2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.796 [INFO][4905] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" HandleID="k8s-pod-network.2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Workload="ci--4081.3.6--n--f4234a6c60-k8s-csi--node--driver--ldqsl-eth0" Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.798 [INFO][4905] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:49.803027 containerd[1463]: 2025-11-08 00:21:49.800 [INFO][4898] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d" Nov 8 00:21:49.803603 containerd[1463]: time="2025-11-08T00:21:49.803124089Z" level=info msg="TearDown network for sandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\" successfully" Nov 8 00:21:49.806075 containerd[1463]: time="2025-11-08T00:21:49.806015010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:49.806075 containerd[1463]: time="2025-11-08T00:21:49.806083635Z" level=info msg="RemovePodSandbox \"2fb0375b9c65fb026c3d53fa8caa4fc1dd7e5761f044f560ea6ee5134f795a7d\" returns successfully" Nov 8 00:21:49.807840 containerd[1463]: time="2025-11-08T00:21:49.806952256Z" level=info msg="StopPodSandbox for \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\"" Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.864 [WARNING][4919] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"25d1e64e-eeda-401a-ad8a-78903d2ff60f", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6", Pod:"coredns-674b8bbfcf-h5wgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic814f348c90", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.865 [INFO][4919] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.865 [INFO][4919] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" iface="eth0" netns="" Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.865 [INFO][4919] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.865 [INFO][4919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.916 [INFO][4926] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" HandleID="k8s-pod-network.5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.917 [INFO][4926] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.917 [INFO][4926] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.930 [WARNING][4926] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" HandleID="k8s-pod-network.5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.930 [INFO][4926] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" HandleID="k8s-pod-network.5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.933 [INFO][4926] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:49.940915 containerd[1463]: 2025-11-08 00:21:49.937 [INFO][4919] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:49.940915 containerd[1463]: time="2025-11-08T00:21:49.940859172Z" level=info msg="TearDown network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\" successfully" Nov 8 00:21:49.942188 containerd[1463]: time="2025-11-08T00:21:49.940945708Z" level=info msg="StopPodSandbox for \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\" returns successfully" Nov 8 00:21:49.942306 containerd[1463]: time="2025-11-08T00:21:49.942277939Z" level=info msg="RemovePodSandbox for \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\"" Nov 8 00:21:49.942347 containerd[1463]: time="2025-11-08T00:21:49.942313958Z" level=info msg="Forcibly stopping sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\"" Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.000 [WARNING][4941] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"25d1e64e-eeda-401a-ad8a-78903d2ff60f", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"66313b12934647632a1b1c2a64808aa595f7381a8cc22c50dc2231e5308327a6", Pod:"coredns-674b8bbfcf-h5wgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic814f348c90", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.000 [INFO][4941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.000 [INFO][4941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" iface="eth0" netns="" Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.000 [INFO][4941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.000 [INFO][4941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.036 [INFO][4950] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" HandleID="k8s-pod-network.5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.036 [INFO][4950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.036 [INFO][4950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.045 [WARNING][4950] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" HandleID="k8s-pod-network.5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.045 [INFO][4950] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" HandleID="k8s-pod-network.5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Workload="ci--4081.3.6--n--f4234a6c60-k8s-coredns--674b8bbfcf--h5wgp-eth0" Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.048 [INFO][4950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:50.054068 containerd[1463]: 2025-11-08 00:21:50.051 [INFO][4941] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435" Nov 8 00:21:50.055737 containerd[1463]: time="2025-11-08T00:21:50.054031027Z" level=info msg="TearDown network for sandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\" successfully" Nov 8 00:21:50.059527 containerd[1463]: time="2025-11-08T00:21:50.059450738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:50.059729 containerd[1463]: time="2025-11-08T00:21:50.059554149Z" level=info msg="RemovePodSandbox \"5014d2ddbdc7df72a8fd90c584af983847eba36d6a88a719eee03065bcf9d435\" returns successfully" Nov 8 00:21:50.060973 containerd[1463]: time="2025-11-08T00:21:50.060932794Z" level=info msg="StopPodSandbox for \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\"" Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.119 [WARNING][4965] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0", GenerateName:"calico-apiserver-596fc4bd76-", Namespace:"calico-apiserver", SelfLink:"", UID:"747a1d48-6b9b-4ad4-aae6-7e918f295a7f", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596fc4bd76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b", Pod:"calico-apiserver-596fc4bd76-8l5hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d01887687d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.119 [INFO][4965] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.119 [INFO][4965] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" iface="eth0" netns="" Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.119 [INFO][4965] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.119 [INFO][4965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.155 [INFO][4972] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" HandleID="k8s-pod-network.e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.155 [INFO][4972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.155 [INFO][4972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.167 [WARNING][4972] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" HandleID="k8s-pod-network.e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.167 [INFO][4972] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" HandleID="k8s-pod-network.e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.169 [INFO][4972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:50.174294 containerd[1463]: 2025-11-08 00:21:50.172 [INFO][4965] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:50.176973 containerd[1463]: time="2025-11-08T00:21:50.174365731Z" level=info msg="TearDown network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\" successfully" Nov 8 00:21:50.176973 containerd[1463]: time="2025-11-08T00:21:50.174404376Z" level=info msg="StopPodSandbox for \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\" returns successfully" Nov 8 00:21:50.176973 containerd[1463]: time="2025-11-08T00:21:50.175344958Z" level=info msg="RemovePodSandbox for \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\"" Nov 8 00:21:50.176973 containerd[1463]: time="2025-11-08T00:21:50.175394470Z" level=info msg="Forcibly stopping sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\"" Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.232 [WARNING][4987] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0", GenerateName:"calico-apiserver-596fc4bd76-", Namespace:"calico-apiserver", SelfLink:"", UID:"747a1d48-6b9b-4ad4-aae6-7e918f295a7f", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596fc4bd76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"49cf46862b3508c8be8c6bd1519e8f27ad7e4ae912e391b6c1dc234a3aa4561b", Pod:"calico-apiserver-596fc4bd76-8l5hn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d01887687d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.233 [INFO][4987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.233 [INFO][4987] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" iface="eth0" netns="" Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.233 [INFO][4987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.233 [INFO][4987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.267 [INFO][4994] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" HandleID="k8s-pod-network.e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.267 [INFO][4994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.267 [INFO][4994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.276 [WARNING][4994] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" HandleID="k8s-pod-network.e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.276 [INFO][4994] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" HandleID="k8s-pod-network.e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--8l5hn-eth0" Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.279 [INFO][4994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:50.285280 containerd[1463]: 2025-11-08 00:21:50.282 [INFO][4987] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45" Nov 8 00:21:50.286115 containerd[1463]: time="2025-11-08T00:21:50.285388239Z" level=info msg="TearDown network for sandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\" successfully" Nov 8 00:21:50.289395 containerd[1463]: time="2025-11-08T00:21:50.289324517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:50.289546 containerd[1463]: time="2025-11-08T00:21:50.289418646Z" level=info msg="RemovePodSandbox \"e3d3a9794e8832f17060aa759c8b13353c290fe2812f5b72fc5a74ceb9406e45\" returns successfully" Nov 8 00:21:50.290178 containerd[1463]: time="2025-11-08T00:21:50.290139450Z" level=info msg="StopPodSandbox for \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\"" Nov 8 00:21:50.376344 containerd[1463]: time="2025-11-08T00:21:50.376306072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.341 [WARNING][5008] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.342 [INFO][5008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.342 [INFO][5008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" iface="eth0" netns="" Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.342 [INFO][5008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.342 [INFO][5008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.395 [INFO][5015] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" HandleID="k8s-pod-network.f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.395 [INFO][5015] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.395 [INFO][5015] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.405 [WARNING][5015] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" HandleID="k8s-pod-network.f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.405 [INFO][5015] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" HandleID="k8s-pod-network.f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.407 [INFO][5015] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:50.413015 containerd[1463]: 2025-11-08 00:21:50.410 [INFO][5008] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:50.413826 containerd[1463]: time="2025-11-08T00:21:50.413071872Z" level=info msg="TearDown network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\" successfully" Nov 8 00:21:50.413826 containerd[1463]: time="2025-11-08T00:21:50.413100015Z" level=info msg="StopPodSandbox for \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\" returns successfully" Nov 8 00:21:50.415227 containerd[1463]: time="2025-11-08T00:21:50.414475858Z" level=info msg="RemovePodSandbox for \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\"" Nov 8 00:21:50.415227 containerd[1463]: time="2025-11-08T00:21:50.414517408Z" level=info msg="Forcibly stopping sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\"" Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.475 [WARNING][5029] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" WorkloadEndpoint="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.475 [INFO][5029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.475 [INFO][5029] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" iface="eth0" netns="" Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.475 [INFO][5029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.475 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.511 [INFO][5036] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" HandleID="k8s-pod-network.f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.512 [INFO][5036] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.512 [INFO][5036] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.521 [WARNING][5036] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" HandleID="k8s-pod-network.f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.521 [INFO][5036] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" HandleID="k8s-pod-network.f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Workload="ci--4081.3.6--n--f4234a6c60-k8s-whisker--86784b5f66--2xvmd-eth0" Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.524 [INFO][5036] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:50.529917 containerd[1463]: 2025-11-08 00:21:50.526 [INFO][5029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776" Nov 8 00:21:50.529917 containerd[1463]: time="2025-11-08T00:21:50.529239281Z" level=info msg="TearDown network for sandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\" successfully" Nov 8 00:21:50.533214 containerd[1463]: time="2025-11-08T00:21:50.533063592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:50.533214 containerd[1463]: time="2025-11-08T00:21:50.533165354Z" level=info msg="RemovePodSandbox \"f8d12e4c9b0bb99fc5220ec771f551a3c10a7f09de878434b40bd126c19a8776\" returns successfully" Nov 8 00:21:50.534351 containerd[1463]: time="2025-11-08T00:21:50.534127947Z" level=info msg="StopPodSandbox for \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\"" Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.587 [WARNING][5050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"329e0556-11fe-424d-9621-4c503891f4c4", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3", Pod:"goldmane-666569f655-fm88z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif044e612c2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.588 [INFO][5050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.588 [INFO][5050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" iface="eth0" netns="" Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.588 [INFO][5050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.588 [INFO][5050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.622 [INFO][5057] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" HandleID="k8s-pod-network.d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.623 [INFO][5057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.623 [INFO][5057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.641 [WARNING][5057] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" HandleID="k8s-pod-network.d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.641 [INFO][5057] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" HandleID="k8s-pod-network.d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.644 [INFO][5057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:50.649666 containerd[1463]: 2025-11-08 00:21:50.646 [INFO][5050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:50.649666 containerd[1463]: time="2025-11-08T00:21:50.649548567Z" level=info msg="TearDown network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\" successfully" Nov 8 00:21:50.649666 containerd[1463]: time="2025-11-08T00:21:50.649585490Z" level=info msg="StopPodSandbox for \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\" returns successfully" Nov 8 00:21:50.651232 containerd[1463]: time="2025-11-08T00:21:50.651178002Z" level=info msg="RemovePodSandbox for \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\"" Nov 8 00:21:50.651232 containerd[1463]: time="2025-11-08T00:21:50.651227842Z" level=info msg="Forcibly stopping sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\"" Nov 8 00:21:50.740128 containerd[1463]: time="2025-11-08T00:21:50.740067685Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:50.740990 containerd[1463]: time="2025-11-08T00:21:50.740944772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:21:50.741105 containerd[1463]: time="2025-11-08T00:21:50.741049496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:21:50.742150 kubelet[2501]: E1108 00:21:50.742097 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:50.743093 kubelet[2501]: E1108 00:21:50.742165 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:21:50.743093 kubelet[2501]: E1108 00:21:50.742367 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpwkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ldqsl_calico-system(71de4983-7c24-4272-8fa7-0a4b5407d2c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:50.747811 containerd[1463]: time="2025-11-08T00:21:50.747444722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.708 [WARNING][5072] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"329e0556-11fe-424d-9621-4c503891f4c4", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"ed7f53b07fde7ae2c43e0a32b78b2371c852721ec88249675ac21a533c3119d3", Pod:"goldmane-666569f655-fm88z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif044e612c2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.708 [INFO][5072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.708 [INFO][5072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" iface="eth0" netns="" Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.708 [INFO][5072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.708 [INFO][5072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.765 [INFO][5079] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" HandleID="k8s-pod-network.d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.766 [INFO][5079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.766 [INFO][5079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.776 [WARNING][5079] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" HandleID="k8s-pod-network.d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.777 [INFO][5079] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" HandleID="k8s-pod-network.d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Workload="ci--4081.3.6--n--f4234a6c60-k8s-goldmane--666569f655--fm88z-eth0" Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.780 [INFO][5079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:50.789215 containerd[1463]: 2025-11-08 00:21:50.785 [INFO][5072] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c" Nov 8 00:21:50.790388 containerd[1463]: time="2025-11-08T00:21:50.789251164Z" level=info msg="TearDown network for sandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\" successfully" Nov 8 00:21:50.793690 containerd[1463]: time="2025-11-08T00:21:50.793559888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:50.793911 containerd[1463]: time="2025-11-08T00:21:50.793721637Z" level=info msg="RemovePodSandbox \"d2fa9891180b8f061455775b85e6c1d4e56fd7c01b4fcbfb4195375517dbe81c\" returns successfully" Nov 8 00:21:50.794597 containerd[1463]: time="2025-11-08T00:21:50.794571684Z" level=info msg="StopPodSandbox for \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\"" Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.853 [WARNING][5093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0", GenerateName:"calico-kube-controllers-bc684977b-", Namespace:"calico-system", SelfLink:"", UID:"10d0265b-9d98-419e-98b8-ef3078177b60", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc684977b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208", Pod:"calico-kube-controllers-bc684977b-dwfpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc797b3cc36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.853 [INFO][5093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.853 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" iface="eth0" netns="" Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.853 [INFO][5093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.853 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.893 [INFO][5100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" HandleID="k8s-pod-network.3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.893 [INFO][5100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.893 [INFO][5100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.901 [WARNING][5100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" HandleID="k8s-pod-network.3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.901 [INFO][5100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" HandleID="k8s-pod-network.3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.903 [INFO][5100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:50.907680 containerd[1463]: 2025-11-08 00:21:50.905 [INFO][5093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:50.907680 containerd[1463]: time="2025-11-08T00:21:50.907481579Z" level=info msg="TearDown network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\" successfully" Nov 8 00:21:50.907680 containerd[1463]: time="2025-11-08T00:21:50.907510017Z" level=info msg="StopPodSandbox for \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\" returns successfully" Nov 8 00:21:50.909241 containerd[1463]: time="2025-11-08T00:21:50.908650299Z" level=info msg="RemovePodSandbox for \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\"" Nov 8 00:21:50.909241 containerd[1463]: time="2025-11-08T00:21:50.908683466Z" level=info msg="Forcibly stopping sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\"" Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.953 [WARNING][5114] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0", GenerateName:"calico-kube-controllers-bc684977b-", Namespace:"calico-system", SelfLink:"", UID:"10d0265b-9d98-419e-98b8-ef3078177b60", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bc684977b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"3486ebb938544a575eb717b9393cb9fc7a9270f4d8fb9f073f310584d614f208", Pod:"calico-kube-controllers-bc684977b-dwfpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califc797b3cc36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.954 [INFO][5114] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.954 [INFO][5114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" iface="eth0" netns="" Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.954 [INFO][5114] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.954 [INFO][5114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.983 [INFO][5121] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" HandleID="k8s-pod-network.3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.983 [INFO][5121] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.983 [INFO][5121] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.996 [WARNING][5121] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" HandleID="k8s-pod-network.3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.996 [INFO][5121] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" HandleID="k8s-pod-network.3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--kube--controllers--bc684977b--dwfpx-eth0" Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:50.998 [INFO][5121] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:51.003235 containerd[1463]: 2025-11-08 00:21:51.001 [INFO][5114] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05" Nov 8 00:21:51.003726 containerd[1463]: time="2025-11-08T00:21:51.003284620Z" level=info msg="TearDown network for sandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\" successfully" Nov 8 00:21:51.007308 containerd[1463]: time="2025-11-08T00:21:51.007236548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:51.007996 containerd[1463]: time="2025-11-08T00:21:51.007327017Z" level=info msg="RemovePodSandbox \"3aa5734e32fb91699372a70f7c39656242f190bd32ab54489341cb88f8fb1b05\" returns successfully" Nov 8 00:21:51.008764 containerd[1463]: time="2025-11-08T00:21:51.008303800Z" level=info msg="StopPodSandbox for \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\"" Nov 8 00:21:51.098923 containerd[1463]: time="2025-11-08T00:21:51.098856042Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:51.099782 containerd[1463]: time="2025-11-08T00:21:51.099638855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:21:51.099920 containerd[1463]: time="2025-11-08T00:21:51.099768869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:21:51.100446 kubelet[2501]: E1108 00:21:51.100397 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:21:51.100981 kubelet[2501]: E1108 00:21:51.100677 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:21:51.101856 kubelet[2501]: E1108 00:21:51.101085 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpwkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ldqsl_calico-system(71de4983-7c24-4272-8fa7-0a4b5407d2c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:51.103751 kubelet[2501]: E1108 00:21:51.103322 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.057 [WARNING][5135] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, 
don't delete WEP. ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0", GenerateName:"calico-apiserver-596fc4bd76-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b72e8b4-2554-45c8-82a8-87c096020fee", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596fc4bd76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b", Pod:"calico-apiserver-596fc4bd76-bh4h9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06ef27cc135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.057 [INFO][5135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.057 [INFO][5135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" iface="eth0" netns="" Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.057 [INFO][5135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.057 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.091 [INFO][5142] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" HandleID="k8s-pod-network.11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.091 [INFO][5142] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.091 [INFO][5142] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.107 [WARNING][5142] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" HandleID="k8s-pod-network.11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.107 [INFO][5142] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" HandleID="k8s-pod-network.11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.109 [INFO][5142] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:51.116324 containerd[1463]: 2025-11-08 00:21:51.112 [INFO][5135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:51.117306 containerd[1463]: time="2025-11-08T00:21:51.116375736Z" level=info msg="TearDown network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\" successfully" Nov 8 00:21:51.117306 containerd[1463]: time="2025-11-08T00:21:51.116411211Z" level=info msg="StopPodSandbox for \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\" returns successfully" Nov 8 00:21:51.117306 containerd[1463]: time="2025-11-08T00:21:51.117151471Z" level=info msg="RemovePodSandbox for \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\"" Nov 8 00:21:51.117306 containerd[1463]: time="2025-11-08T00:21:51.117279016Z" level=info msg="Forcibly stopping sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\"" Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.169 [WARNING][5156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0", GenerateName:"calico-apiserver-596fc4bd76-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b72e8b4-2554-45c8-82a8-87c096020fee", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 21, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"596fc4bd76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f4234a6c60", ContainerID:"5b8255cdee92f64dd870ba4c81122d5b045937444e92d626ef7e80114d44832b", Pod:"calico-apiserver-596fc4bd76-bh4h9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06ef27cc135", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.169 [INFO][5156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.169 [INFO][5156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" iface="eth0" netns="" Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.169 [INFO][5156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.169 [INFO][5156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.194 [INFO][5163] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" HandleID="k8s-pod-network.11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.195 [INFO][5163] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.195 [INFO][5163] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.204 [WARNING][5163] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" HandleID="k8s-pod-network.11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.204 [INFO][5163] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" HandleID="k8s-pod-network.11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Workload="ci--4081.3.6--n--f4234a6c60-k8s-calico--apiserver--596fc4bd76--bh4h9-eth0" Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.206 [INFO][5163] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:21:51.211022 containerd[1463]: 2025-11-08 00:21:51.209 [INFO][5156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867" Nov 8 00:21:51.216998 containerd[1463]: time="2025-11-08T00:21:51.216168847Z" level=info msg="TearDown network for sandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\" successfully" Nov 8 00:21:51.218883 containerd[1463]: time="2025-11-08T00:21:51.218832620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:21:51.219189 containerd[1463]: time="2025-11-08T00:21:51.219112033Z" level=info msg="RemovePodSandbox \"11fe06bff9b269a3cb4dc0b9246a73152d95d44bafaf2d4ad9275bffa53d2867\" returns successfully" Nov 8 00:21:51.378174 containerd[1463]: time="2025-11-08T00:21:51.377785278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:21:51.692252 containerd[1463]: time="2025-11-08T00:21:51.692172144Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:51.692944 containerd[1463]: time="2025-11-08T00:21:51.692908687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:21:51.693034 containerd[1463]: time="2025-11-08T00:21:51.692956931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:51.693277 kubelet[2501]: E1108 00:21:51.693221 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:51.693382 kubelet[2501]: E1108 00:21:51.693300 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:21:51.693853 kubelet[2501]: E1108 00:21:51.693496 2501 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzvx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fm88z_calico-system(329e0556-11fe-424d-9621-4c503891f4c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:51.695108 kubelet[2501]: E1108 00:21:51.695061 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fm88z" podUID="329e0556-11fe-424d-9621-4c503891f4c4" Nov 8 00:21:52.374483 containerd[1463]: 
time="2025-11-08T00:21:52.374032949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:52.526221 systemd[1]: Started sshd@8-64.23.225.39:22-140.233.190.96:37612.service - OpenSSH per-connection server daemon (140.233.190.96:37612). Nov 8 00:21:52.715786 containerd[1463]: time="2025-11-08T00:21:52.715532544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:52.717711 containerd[1463]: time="2025-11-08T00:21:52.716541838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:21:52.717711 containerd[1463]: time="2025-11-08T00:21:52.716647827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:52.717939 kubelet[2501]: E1108 00:21:52.716802 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:52.717939 kubelet[2501]: E1108 00:21:52.716849 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:52.717939 kubelet[2501]: E1108 00:21:52.717124 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w62mc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596fc4bd76-8l5hn_calico-apiserver(747a1d48-6b9b-4ad4-aae6-7e918f295a7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:52.718633 kubelet[2501]: E1108 00:21:52.718606 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" podUID="747a1d48-6b9b-4ad4-aae6-7e918f295a7f" Nov 8 00:21:52.718805 containerd[1463]: time="2025-11-08T00:21:52.718778162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:21:52.750354 systemd[1]: Started sshd@9-64.23.225.39:22-139.178.68.195:49806.service - OpenSSH per-connection server daemon (139.178.68.195:49806). Nov 8 00:21:52.807626 sshd[5172]: Accepted publickey for core from 139.178.68.195 port 49806 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:52.810523 sshd[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:52.818153 systemd-logind[1440]: New session 9 of user core. Nov 8 00:21:52.824527 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:21:53.014394 sshd[5172]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:53.021484 systemd[1]: sshd@9-64.23.225.39:22-139.178.68.195:49806.service: Deactivated successfully. Nov 8 00:21:53.025898 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:21:53.027210 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:21:53.028677 systemd-logind[1440]: Removed session 9. 
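The NotFound pull failures recorded above come back from containerd's resolver before any bytes are fetched: the requested tags simply do not resolve under ghcr.io/flatcar/calico, so the registry's 404 is surfaced as a gRPC NotFound that kubelet then wraps. A minimal Go sketch that reproduces the same resolve-time error from the node itself, assuming kubelet's containerd listens on the default socket and uses the k8s.io namespace (both assumptions, matching common defaults rather than anything read from this host):

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the node's containerd; socket path and namespace are
	// assumptions matching common kubelet/containerd defaults.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The same reference kubelet asked for; resolution fails with the
	// "not found" error seen in the PullImage entries above.
	_, err = client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.4")
	fmt.Println("pull result:", err)
}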
Nov 8 00:21:53.071495 containerd[1463]: time="2025-11-08T00:21:53.071410582Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:53.072233 containerd[1463]: time="2025-11-08T00:21:53.072172507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:21:53.072438 containerd[1463]: time="2025-11-08T00:21:53.072213277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:21:53.072604 kubelet[2501]: E1108 00:21:53.072544 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:53.072690 kubelet[2501]: E1108 00:21:53.072629 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:21:53.072904 kubelet[2501]: E1108 00:21:53.072828 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwdzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-bc684977b-dwfpx_calico-system(10d0265b-9d98-419e-98b8-ef3078177b60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:53.074500 kubelet[2501]: E1108 00:21:53.074439 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" podUID="10d0265b-9d98-419e-98b8-ef3078177b60" Nov 8 00:21:53.376090 containerd[1463]: time="2025-11-08T00:21:53.375577551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:21:53.725526 containerd[1463]: time="2025-11-08T00:21:53.725381527Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:21:53.726131 containerd[1463]: time="2025-11-08T00:21:53.726065822Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:21:53.726215 containerd[1463]: time="2025-11-08T00:21:53.726170778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:21:53.726404 kubelet[2501]: E1108 00:21:53.726360 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:53.726702 kubelet[2501]: E1108 00:21:53.726416 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:21:53.726702 kubelet[2501]: 
E1108 00:21:53.726564 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9kgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596fc4bd76-bh4h9_calico-apiserver(9b72e8b4-2554-45c8-82a8-87c096020fee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:21:53.727841 kubelet[2501]: E1108 00:21:53.727778 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" podUID="9b72e8b4-2554-45c8-82a8-87c096020fee" Nov 8 00:21:58.040414 systemd[1]: Started sshd@10-64.23.225.39:22-139.178.68.195:56244.service - OpenSSH per-connection server daemon (139.178.68.195:56244). Nov 8 00:21:58.091383 sshd[5195]: Accepted publickey for core from 139.178.68.195 port 56244 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:58.094491 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:58.100996 systemd-logind[1440]: New session 10 of user core. 
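The kuberuntime_manager "Unhandled Error" entries read as one enormous &Container{...} literal because kubelet logs the typed container spec, and the k8s.io/api types carry generated String methods that render in exactly that shape. A stdlib-only stand-in for the mechanism (the type and method here are illustrative, not kubelet's code):

package main

import "fmt"

// Container stands in for k8s.io/api core/v1's Container; in the real API
// the String method is code-generated, which is why the log dumps above
// render as `&Container{Name:...,Image:...,}` on a single line.
type Container struct {
	Name  string
	Image string
}

func (c *Container) String() string {
	return fmt.Sprintf("&Container{Name:%s,Image:%s,}", c.Name, c.Image)
}

func main() {
	c := &Container{
		Name:  "calico-apiserver",
		Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4",
	}
	// fmt detects the Stringer, so %v yields the struct-literal form.
	fmt.Printf("container %v start failed in pod ...\n", c)
}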
Nov 8 00:21:58.106195 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:21:58.284477 sshd[5195]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:58.296767 systemd[1]: sshd@10-64.23.225.39:22-139.178.68.195:56244.service: Deactivated successfully. Nov 8 00:21:58.300498 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:21:58.304613 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:21:58.314438 systemd[1]: Started sshd@11-64.23.225.39:22-139.178.68.195:56256.service - OpenSSH per-connection server daemon (139.178.68.195:56256). Nov 8 00:21:58.316295 systemd-logind[1440]: Removed session 10. Nov 8 00:21:58.379368 sshd[5209]: Accepted publickey for core from 139.178.68.195 port 56256 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:58.386582 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:58.399242 systemd-logind[1440]: New session 11 of user core. Nov 8 00:21:58.408177 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:21:58.646508 sshd[5209]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:58.661340 systemd[1]: sshd@11-64.23.225.39:22-139.178.68.195:56256.service: Deactivated successfully. Nov 8 00:21:58.666504 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:21:58.670284 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:21:58.679263 systemd[1]: Started sshd@12-64.23.225.39:22-139.178.68.195:56270.service - OpenSSH per-connection server daemon (139.178.68.195:56270). Nov 8 00:21:58.684981 systemd-logind[1440]: Removed session 11. Nov 8 00:21:58.758599 sshd[5220]: Accepted publickey for core from 139.178.68.195 port 56270 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:21:58.763770 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:21:58.771628 systemd-logind[1440]: New session 12 of user core. Nov 8 00:21:58.779625 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:21:58.939748 sshd[5220]: pam_unix(sshd:session): session closed for user core Nov 8 00:21:58.945954 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:21:58.946522 systemd[1]: sshd@12-64.23.225.39:22-139.178.68.195:56270.service: Deactivated successfully. Nov 8 00:21:58.949993 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:21:58.951437 systemd-logind[1440]: Removed session 12. Nov 8 00:22:01.879135 kubelet[2501]: E1108 00:22:01.879070 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:22:02.514650 sshd[5170]: Connection closed by authenticating user root 140.233.190.96 port 37612 [preauth] Nov 8 00:22:02.517199 systemd[1]: sshd@8-64.23.225.39:22-140.233.190.96:37612.service: Deactivated successfully. 
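The recurring dns.go "Nameserver limits exceeded" errors note that the applied resolver line keeps only the leading entries of the host's resolv.conf; the duplicate 67.207.67.3 surviving in the applied line suggests truncation happens without deduplication. A sketch of that behavior, with the limit of three taken as an assumption from upstream kubelet defaults and the fourth resolver entry invented purely for illustration:

package main

import "fmt"

// capNameservers mimics the truncation the dns.go warnings describe:
// keep the first `limit` entries verbatim, duplicates included.
func capNameservers(ns []string, limit int) []string {
	if len(ns) <= limit {
		return ns
	}
	return ns[:limit]
}

func main() {
	// Hypothetical resolv.conf contents; the first three entries match
	// the "applied nameserver line" in the log, the fourth is assumed.
	host := []string{"67.207.67.3", "67.207.67.2", "67.207.67.3", "10.96.0.10"}
	fmt.Println(capNameservers(host, 3))
}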
Nov 8 00:22:03.375828 kubelet[2501]: E1108 00:22:03.375503 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76544c75c6-2xtdd" podUID="0e8bedc1-e771-4bb5-bd8c-8fc39604616a" Nov 8 00:22:03.959261 systemd[1]: Started sshd@13-64.23.225.39:22-139.178.68.195:55282.service - OpenSSH per-connection server daemon (139.178.68.195:55282). Nov 8 00:22:04.004028 sshd[5262]: Accepted publickey for core from 139.178.68.195 port 55282 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:04.008579 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:04.015765 systemd-logind[1440]: New session 13 of user core. Nov 8 00:22:04.024202 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:22:04.180440 sshd[5262]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:04.185537 systemd[1]: sshd@13-64.23.225.39:22-139.178.68.195:55282.service: Deactivated successfully. Nov 8 00:22:04.188383 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:22:04.191240 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:22:04.192529 systemd-logind[1440]: Removed session 13. 
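Pods with several failing containers, like whisker-76544c75c6-2xtdd above, surface one sync error whose message is a bracketed list: kubelet aggregates the per-container StartContainer failures before handing them to pod_workers. A stdlib stand-in for the rendering (upstream uses k8s.io/apimachinery's error aggregation; this only mimics the "[e1, e2]" form):

package main

import (
	"errors"
	"fmt"
	"strings"
)

// aggregate joins one error per failing container into the single
// bracketed message shape seen in the pod_workers entries above.
func aggregate(errs []error) error {
	parts := make([]string, len(errs))
	for i, e := range errs {
		parts[i] = e.Error()
	}
	return errors.New("[" + strings.Join(parts, ", ") + "]")
}

func main() {
	whisker := fmt.Errorf("failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff")
	backend := fmt.Errorf("failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff")
	fmt.Println("Error syncing pod, skipping:", aggregate([]error{whisker, backend}))
}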
Nov 8 00:22:04.374353 kubelet[2501]: E1108 00:22:04.374218 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fm88z" podUID="329e0556-11fe-424d-9621-4c503891f4c4" Nov 8 00:22:04.374353 kubelet[2501]: E1108 00:22:04.374265 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" podUID="747a1d48-6b9b-4ad4-aae6-7e918f295a7f" Nov 8 00:22:05.374566 kubelet[2501]: E1108 00:22:05.374194 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" podUID="10d0265b-9d98-419e-98b8-ef3078177b60" Nov 8 00:22:06.375764 kubelet[2501]: E1108 00:22:06.375697 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0" Nov 8 00:22:07.375923 kubelet[2501]: E1108 00:22:07.374095 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:22:07.378907 kubelet[2501]: E1108 00:22:07.378551 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" podUID="9b72e8b4-2554-45c8-82a8-87c096020fee" Nov 8 00:22:08.372685 kubelet[2501]: E1108 00:22:08.372624 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:22:08.373034 kubelet[2501]: E1108 00:22:08.372908 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Nov 8 00:22:09.201316 systemd[1]: Started sshd@14-64.23.225.39:22-139.178.68.195:55284.service - OpenSSH per-connection server daemon (139.178.68.195:55284). Nov 8 00:22:09.256133 sshd[5281]: Accepted publickey for core from 139.178.68.195 port 55284 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:09.256888 sshd[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:09.263552 systemd-logind[1440]: New session 14 of user core. Nov 8 00:22:09.274422 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:22:09.443678 sshd[5281]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:09.449138 systemd[1]: sshd@14-64.23.225.39:22-139.178.68.195:55284.service: Deactivated successfully. Nov 8 00:22:09.453979 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:22:09.457601 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:22:09.460491 systemd-logind[1440]: Removed session 14. Nov 8 00:22:10.189244 systemd[1]: Started sshd@15-64.23.225.39:22-207.154.232.101:6116.service - OpenSSH per-connection server daemon (207.154.232.101:6116). Nov 8 00:22:14.213220 sshd[5294]: kex_protocol_error: type 20 seq 2 [preauth] Nov 8 00:22:14.213220 sshd[5294]: kex_protocol_error: type 30 seq 3 [preauth] Nov 8 00:22:14.465100 systemd[1]: Started sshd@16-64.23.225.39:22-139.178.68.195:52262.service - OpenSSH per-connection server daemon (139.178.68.195:52262). Nov 8 00:22:14.537537 sshd[5303]: Accepted publickey for core from 139.178.68.195 port 52262 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:14.540802 sshd[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:14.551146 systemd-logind[1440]: New session 15 of user core. Nov 8 00:22:14.556299 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:22:14.769736 sshd[5303]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:14.780772 systemd[1]: sshd@16-64.23.225.39:22-139.178.68.195:52262.service: Deactivated successfully. Nov 8 00:22:14.784011 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:22:14.786457 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:22:14.788520 systemd-logind[1440]: Removed session 15. 
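By this point the log alternates between fresh PullImage attempts and ImagePullBackOff skips: after each failed pull, kubelet waits out a growing delay before letting the runtime try again, which is why identical NotFound errors recur minutes apart. A sketch of the schedule's shape, with the 10s initial delay and 5m cap stated as assumptions from upstream kubelet defaults rather than read from this node:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed upstream defaults: initial 10s delay, doubling per failure,
	// capped at 5 minutes; not taken from this node's configuration.
	delay, limit := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d: back off %v\n", attempt, delay)
		delay *= 2
		if delay > limit {
			delay = limit
		}
	}
}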
Nov 8 00:22:15.378607 containerd[1463]: time="2025-11-08T00:22:15.377997281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:22:15.753003 containerd[1463]: time="2025-11-08T00:22:15.752633044Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:15.755603 containerd[1463]: time="2025-11-08T00:22:15.755540619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:22:15.755603 containerd[1463]: time="2025-11-08T00:22:15.755560596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:15.757008 kubelet[2501]: E1108 00:22:15.756951 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:15.757482 kubelet[2501]: E1108 00:22:15.757019 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:22:15.757482 kubelet[2501]: E1108 00:22:15.757231 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dzvx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-fm88z_calico-system(329e0556-11fe-424d-9621-4c503891f4c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:15.759218 kubelet[2501]: E1108 00:22:15.759136 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fm88z" podUID="329e0556-11fe-424d-9621-4c503891f4c4" Nov 8 00:22:16.148723 sshd[5294]: kex_protocol_error: type 20 seq 4 [preauth] Nov 8 00:22:16.148723 sshd[5294]: kex_protocol_error: type 30 seq 5 [preauth] Nov 8 00:22:16.374539 containerd[1463]: time="2025-11-08T00:22:16.374169019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:16.723139 containerd[1463]: time="2025-11-08T00:22:16.723071274Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:16.723891 containerd[1463]: time="2025-11-08T00:22:16.723833718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:22:16.723988 containerd[1463]: time="2025-11-08T00:22:16.723929841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:22:16.724556 kubelet[2501]: E1108 00:22:16.724215 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:16.724556 kubelet[2501]: E1108 00:22:16.724282 2501 kuberuntime_image.go:42] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:22:16.724556 kubelet[2501]: E1108 00:22:16.724480 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w62mc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596fc4bd76-8l5hn_calico-apiserver(747a1d48-6b9b-4ad4-aae6-7e918f295a7f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:16.725939 kubelet[2501]: E1108 00:22:16.725864 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" podUID="747a1d48-6b9b-4ad4-aae6-7e918f295a7f" Nov 8 00:22:17.376128 containerd[1463]: time="2025-11-08T00:22:17.374822666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:22:17.728174 
containerd[1463]: time="2025-11-08T00:22:17.727909945Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:17.729112 containerd[1463]: time="2025-11-08T00:22:17.728711484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:22:17.729112 containerd[1463]: time="2025-11-08T00:22:17.728783210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:17.729224 kubelet[2501]: E1108 00:22:17.728997 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:17.729224 kubelet[2501]: E1108 00:22:17.729047 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:22:17.729612 kubelet[2501]: E1108 00:22:17.729231 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cwdzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-bc684977b-dwfpx_calico-system(10d0265b-9d98-419e-98b8-ef3078177b60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:17.730572 kubelet[2501]: E1108 00:22:17.730502 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" podUID="10d0265b-9d98-419e-98b8-ef3078177b60" Nov 8 00:22:18.152753 sshd[5294]: kex_protocol_error: type 20 seq 6 [preauth] Nov 8 00:22:18.152753 sshd[5294]: kex_protocol_error: type 30 seq 7 [preauth] Nov 8 00:22:18.375898 containerd[1463]: time="2025-11-08T00:22:18.375210874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:22:18.927771 containerd[1463]: time="2025-11-08T00:22:18.927706275Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:18.928701 containerd[1463]: time="2025-11-08T00:22:18.928589451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:22:18.928701 containerd[1463]: time="2025-11-08T00:22:18.928644751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:22:18.928964 kubelet[2501]: E1108 00:22:18.928905 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:18.929257 kubelet[2501]: E1108 00:22:18.928982 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:22:18.929257 kubelet[2501]: E1108 00:22:18.929155 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6531dc8e84ac47b0b0ceb8f72d200569,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89wrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76544c75c6-2xtdd_calico-system(0e8bedc1-e771-4bb5-bd8c-8fc39604616a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:18.933279 containerd[1463]: time="2025-11-08T00:22:18.933106713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:22:19.280408 containerd[1463]: time="2025-11-08T00:22:19.280104076Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:22:19.281220 containerd[1463]: time="2025-11-08T00:22:19.280986297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:22:19.281220 containerd[1463]: time="2025-11-08T00:22:19.281087702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:22:19.281367 kubelet[2501]: E1108 00:22:19.281262 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:19.281367 kubelet[2501]: E1108 00:22:19.281322 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:22:19.281515 kubelet[2501]: E1108 00:22:19.281458 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89wrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76544c75c6-2xtdd_calico-system(0e8bedc1-e771-4bb5-bd8c-8fc39604616a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:22:19.282993 kubelet[2501]: E1108 00:22:19.282930 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76544c75c6-2xtdd" podUID="0e8bedc1-e771-4bb5-bd8c-8fc39604616a" Nov 8 00:22:19.789385 systemd[1]: Started 
sshd@17-64.23.225.39:22-139.178.68.195:52270.service - OpenSSH per-connection server daemon (139.178.68.195:52270). Nov 8 00:22:19.886978 sshd[5316]: Accepted publickey for core from 139.178.68.195 port 52270 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:19.890213 sshd[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:19.897664 systemd-logind[1440]: New session 16 of user core. Nov 8 00:22:19.905478 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:22:20.136070 sshd[5316]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:20.141284 systemd[1]: sshd@17-64.23.225.39:22-139.178.68.195:52270.service: Deactivated successfully. Nov 8 00:22:20.143499 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:22:20.145992 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:22:20.152338 systemd[1]: Started sshd@18-64.23.225.39:22-139.178.68.195:52284.service - OpenSSH per-connection server daemon (139.178.68.195:52284). Nov 8 00:22:20.155139 systemd-logind[1440]: Removed session 16. Nov 8 00:22:20.215897 sshd[5331]: Accepted publickey for core from 139.178.68.195 port 52284 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:20.218134 sshd[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:20.226439 systemd-logind[1440]: New session 17 of user core. Nov 8 00:22:20.230200 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:22:20.374908 containerd[1463]: time="2025-11-08T00:22:20.374519284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:22:20.580257 sshd[5331]: pam_unix(sshd:session): session closed for user core Nov 8 00:22:20.601617 systemd[1]: Started sshd@19-64.23.225.39:22-139.178.68.195:52288.service - OpenSSH per-connection server daemon (139.178.68.195:52288). Nov 8 00:22:20.602445 systemd[1]: sshd@18-64.23.225.39:22-139.178.68.195:52284.service: Deactivated successfully. Nov 8 00:22:20.610499 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:22:20.617749 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:22:20.622031 systemd-logind[1440]: Removed session 17. Nov 8 00:22:20.693048 sshd[5340]: Accepted publickey for core from 139.178.68.195 port 52288 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4 Nov 8 00:22:20.695811 sshd[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:22:20.702509 systemd-logind[1440]: New session 18 of user core. Nov 8 00:22:20.708206 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 8 00:22:20.740276 containerd[1463]: time="2025-11-08T00:22:20.740035857Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:22:20.742856 containerd[1463]: time="2025-11-08T00:22:20.742548831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:22:20.742856 containerd[1463]: time="2025-11-08T00:22:20.742567379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:22:20.743765 kubelet[2501]: E1108 00:22:20.743333 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:22:20.743765 kubelet[2501]: E1108 00:22:20.743415 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:22:20.743765 kubelet[2501]: E1108 00:22:20.743644 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9kgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-596fc4bd76-bh4h9_calico-apiserver(9b72e8b4-2554-45c8-82a8-87c096020fee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:22:20.745386 kubelet[2501]: E1108 00:22:20.745218 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" podUID="9b72e8b4-2554-45c8-82a8-87c096020fee"
Nov 8 00:22:21.379254 containerd[1463]: time="2025-11-08T00:22:21.379174618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:22:21.619062 sshd[5340]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:21.634575 systemd[1]: Started sshd@20-64.23.225.39:22-139.178.68.195:52294.service - OpenSSH per-connection server daemon (139.178.68.195:52294).
Nov 8 00:22:21.643090 systemd[1]: sshd@19-64.23.225.39:22-139.178.68.195:52288.service: Deactivated successfully.
Nov 8 00:22:21.649527 systemd[1]: session-18.scope: Deactivated successfully.
Nov 8 00:22:21.654375 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit.
Nov 8 00:22:21.658338 systemd-logind[1440]: Removed session 18.
Nov 8 00:22:21.718028 sshd[5357]: Accepted publickey for core from 139.178.68.195 port 52294 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4
Nov 8 00:22:21.721702 sshd[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:21.730655 systemd-logind[1440]: New session 19 of user core.
Nov 8 00:22:21.736228 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 8 00:22:21.746405 containerd[1463]: time="2025-11-08T00:22:21.746345906Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:22:21.747297 containerd[1463]: time="2025-11-08T00:22:21.747242496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:22:21.747390 containerd[1463]: time="2025-11-08T00:22:21.747276800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:22:21.747865 kubelet[2501]: E1108 00:22:21.747566 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:22:21.747865 kubelet[2501]: E1108 00:22:21.747631 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:22:21.747865 kubelet[2501]: E1108 00:22:21.747812 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpwkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ldqsl_calico-system(71de4983-7c24-4272-8fa7-0a4b5407d2c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:22:21.751247 containerd[1463]: time="2025-11-08T00:22:21.750828367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:22:22.081764 containerd[1463]: time="2025-11-08T00:22:22.081390593Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:22:22.083376 containerd[1463]: time="2025-11-08T00:22:22.083072311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:22:22.083376 containerd[1463]: time="2025-11-08T00:22:22.083176845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:22:22.083530 kubelet[2501]: E1108 00:22:22.083385 2501 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:22:22.083530 kubelet[2501]: E1108 00:22:22.083465 2501 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:22:22.085138 kubelet[2501]: E1108 00:22:22.083677 2501 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpwkr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-ldqsl_calico-system(71de4983-7c24-4272-8fa7-0a4b5407d2c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:22:22.085138 kubelet[2501]: E1108 00:22:22.085062 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0"
Nov 8 00:22:22.096793 sshd[5357]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:22.109929 systemd[1]: sshd@20-64.23.225.39:22-139.178.68.195:52294.service: Deactivated successfully.
Nov 8 00:22:22.116379 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:22:22.118236 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:22:22.122912 systemd-logind[1440]: Removed session 19.
Nov 8 00:22:22.133473 systemd[1]: Started sshd@21-64.23.225.39:22-139.178.68.195:52308.service - OpenSSH per-connection server daemon (139.178.68.195:52308).
Nov 8 00:22:22.182092 sshd[5373]: Accepted publickey for core from 139.178.68.195 port 52308 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4
Nov 8 00:22:22.184637 sshd[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:22.190728 systemd-logind[1440]: New session 20 of user core.
Nov 8 00:22:22.198745 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:22:22.353805 sshd[5373]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:22.361290 systemd[1]: sshd@21-64.23.225.39:22-139.178.68.195:52308.service: Deactivated successfully.
Nov 8 00:22:22.364737 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:22:22.366667 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:22:22.367835 systemd-logind[1440]: Removed session 20.
Nov 8 00:22:26.372767 kubelet[2501]: E1108 00:22:26.372715 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 8 00:22:27.372649 systemd[1]: Started sshd@22-64.23.225.39:22-139.178.68.195:41418.service - OpenSSH per-connection server daemon (139.178.68.195:41418).
Nov 8 00:22:27.439353 sshd[5388]: Accepted publickey for core from 139.178.68.195 port 41418 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4
Nov 8 00:22:27.442715 sshd[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:27.449334 systemd-logind[1440]: New session 21 of user core.
Nov 8 00:22:27.460313 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:22:27.678248 sshd[5388]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:27.691176 systemd[1]: sshd@22-64.23.225.39:22-139.178.68.195:41418.service: Deactivated successfully.
Nov 8 00:22:27.693800 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:22:27.694741 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:22:27.695929 systemd-logind[1440]: Removed session 21.
Nov 8 00:22:29.376222 kubelet[2501]: E1108 00:22:29.375760 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-8l5hn" podUID="747a1d48-6b9b-4ad4-aae6-7e918f295a7f"
Nov 8 00:22:29.376854 kubelet[2501]: E1108 00:22:29.376250 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-fm88z" podUID="329e0556-11fe-424d-9621-4c503891f4c4"
Nov 8 00:22:30.374510 kubelet[2501]: E1108 00:22:30.374437 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bc684977b-dwfpx" podUID="10d0265b-9d98-419e-98b8-ef3078177b60"
Nov 8 00:22:32.376665 kubelet[2501]: E1108 00:22:32.376599 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76544c75c6-2xtdd" podUID="0e8bedc1-e771-4bb5-bd8c-8fc39604616a"
Nov 8 00:22:32.696328 systemd[1]: Started sshd@23-64.23.225.39:22-139.178.68.195:41428.service - OpenSSH per-connection server daemon (139.178.68.195:41428).
Nov 8 00:22:32.783648 sshd[5426]: Accepted publickey for core from 139.178.68.195 port 41428 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4
Nov 8 00:22:32.785222 sshd[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:32.792382 systemd-logind[1440]: New session 22 of user core.
Nov 8 00:22:32.799324 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:22:33.041767 sshd[5426]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:33.048540 systemd[1]: sshd@23-64.23.225.39:22-139.178.68.195:41428.service: Deactivated successfully.
Nov 8 00:22:33.052804 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:22:33.054321 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:22:33.055631 systemd-logind[1440]: Removed session 22.
Nov 8 00:22:33.376401 kubelet[2501]: E1108 00:22:33.376111 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-ldqsl" podUID="71de4983-7c24-4272-8fa7-0a4b5407d2c0"
Nov 8 00:22:34.372431 kubelet[2501]: E1108 00:22:34.372375 2501 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Nov 8 00:22:36.374232 kubelet[2501]: E1108 00:22:36.374045 2501 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-596fc4bd76-bh4h9" podUID="9b72e8b4-2554-45c8-82a8-87c096020fee"
Nov 8 00:22:38.065383 systemd[1]: Started sshd@24-64.23.225.39:22-139.178.68.195:56082.service - OpenSSH per-connection server daemon (139.178.68.195:56082).
Nov 8 00:22:38.112797 sshd[5439]: Accepted publickey for core from 139.178.68.195 port 56082 ssh2: RSA SHA256:5cNj4CPCu/29hvQxpofbbrC+CdBdHe1Ci4K+rvpNmY4
Nov 8 00:22:38.115194 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:22:38.121310 systemd-logind[1440]: New session 23 of user core.
Nov 8 00:22:38.128187 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:22:38.303445 sshd[5439]: pam_unix(sshd:session): session closed for user core
Nov 8 00:22:38.310382 systemd[1]: sshd@24-64.23.225.39:22-139.178.68.195:56082.service: Deactivated successfully.
Nov 8 00:22:38.314151 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:22:38.315277 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:22:38.316437 systemd-logind[1440]: Removed session 23.