Nov 1 00:20:39.041095 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:20:39.041130 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:20:39.041144 kernel: BIOS-provided physical RAM map:
Nov 1 00:20:39.041154 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:20:39.041164 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:20:39.041173 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:20:39.041185 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Nov 1 00:20:39.041194 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Nov 1 00:20:39.041206 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:20:39.041216 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 00:20:39.041226 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:20:39.041235 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:20:39.041245 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:20:39.041255 kernel: NX (Execute Disable) protection: active
Nov 1 00:20:39.041269 kernel: APIC: Static calls initialized
Nov 1 00:20:39.041280 kernel: SMBIOS 3.0.0 present.
Nov 1 00:20:39.041291 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Nov 1 00:20:39.041301 kernel: Hypervisor detected: KVM
Nov 1 00:20:39.041311 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:20:39.041322 kernel: kvm-clock: using sched offset of 3436819669 cycles
Nov 1 00:20:39.041332 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:20:39.041344 kernel: tsc: Detected 2495.312 MHz processor
Nov 1 00:20:39.041356 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:20:39.041371 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:20:39.041383 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Nov 1 00:20:39.041394 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 00:20:39.041404 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:20:39.041415 kernel: Using GB pages for direct mapping
Nov 1 00:20:39.041426 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:20:39.041436 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Nov 1 00:20:39.041447 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:20:39.041458 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:20:39.041471 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:20:39.041482 kernel: ACPI: FACS 0x000000007CFE0000 000040
Nov 1 00:20:39.041493 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:20:39.041503 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:20:39.041514 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:20:39.041525 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:20:39.041535 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Nov 1 00:20:39.041547 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Nov 1 00:20:39.041564 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Nov 1 00:20:39.041575 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Nov 1 00:20:39.041586 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Nov 1 00:20:39.041597 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Nov 1 00:20:39.041608 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Nov 1 00:20:39.041619 kernel: No NUMA configuration found
Nov 1 00:20:39.041632 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Nov 1 00:20:39.041644 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Nov 1 00:20:39.041655 kernel: Zone ranges:
Nov 1 00:20:39.041666 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:20:39.041730 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Nov 1 00:20:39.041742 kernel: Normal empty
Nov 1 00:20:39.041753 kernel: Movable zone start for each node
Nov 1 00:20:39.041765 kernel: Early memory node ranges
Nov 1 00:20:39.041776 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:20:39.041787 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Nov 1 00:20:39.041802 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Nov 1 00:20:39.041813 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:20:39.041824 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:20:39.041835 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 1 00:20:39.041847 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:20:39.041858 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:20:39.041869 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:20:39.041881 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:20:39.041892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:20:39.041906 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:20:39.041917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:20:39.041928 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:20:39.041939 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:20:39.041951 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:20:39.041962 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:20:39.041973 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:20:39.041984 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 00:20:39.041996 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:20:39.042009 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:20:39.042021 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:20:39.042032 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:20:39.042044 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:20:39.042054 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:20:39.042065 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 1 00:20:39.042080 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:20:39.042091 kernel: random: crng init done
Nov 1 00:20:39.042103 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:20:39.042117 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:20:39.042128 kernel: Fallback order for Node 0: 0
Nov 1 00:20:39.042139 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Nov 1 00:20:39.042150 kernel: Policy zone: DMA32
Nov 1 00:20:39.042162 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:20:39.042174 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 125152K reserved, 0K cma-reserved)
Nov 1 00:20:39.042185 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:20:39.042197 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:20:39.042210 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:20:39.042221 kernel: Dynamic Preempt: voluntary
Nov 1 00:20:39.042232 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:20:39.042245 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:20:39.042256 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:20:39.042268 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:20:39.042279 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:20:39.042290 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:20:39.042302 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:20:39.042313 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:20:39.042326 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:20:39.042338 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:20:39.042349 kernel: Console: colour VGA+ 80x25
Nov 1 00:20:39.042360 kernel: printk: console [tty0] enabled
Nov 1 00:20:39.042372 kernel: printk: console [ttyS0] enabled
Nov 1 00:20:39.042383 kernel: ACPI: Core revision 20230628
Nov 1 00:20:39.042395 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:20:39.042406 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:20:39.042418 kernel: x2apic enabled
Nov 1 00:20:39.042432 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:20:39.042443 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:20:39.042454 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:20:39.042466 kernel: Calibrating delay loop (skipped) preset value.. 4990.62 BogoMIPS (lpj=2495312)
Nov 1 00:20:39.042477 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:20:39.042488 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:20:39.042500 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:20:39.042511 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:20:39.042535 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:20:39.042547 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:20:39.042562 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 00:20:39.042587 kernel: active return thunk: retbleed_return_thunk
Nov 1 00:20:39.042612 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 00:20:39.042629 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:20:39.042645 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:20:39.043779 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:20:39.043813 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:20:39.043826 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:20:39.043838 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:20:39.043850 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 00:20:39.043862 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:20:39.043874 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:20:39.043886 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:20:39.043913 kernel: landlock: Up and running.
Nov 1 00:20:39.043938 kernel: SELinux: Initializing.
Nov 1 00:20:39.043954 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:20:39.043966 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:20:39.043979 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 00:20:39.043991 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:20:39.044003 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:20:39.044015 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:20:39.044027 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:20:39.044039 kernel: ... version: 0
Nov 1 00:20:39.044050 kernel: ... bit width: 48
Nov 1 00:20:39.044064 kernel: ... generic registers: 6
Nov 1 00:20:39.044076 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:20:39.044088 kernel: ... max period: 00007fffffffffff
Nov 1 00:20:39.044100 kernel: ... fixed-purpose events: 0
Nov 1 00:20:39.044112 kernel: ... event mask: 000000000000003f
Nov 1 00:20:39.044123 kernel: signal: max sigframe size: 1776
Nov 1 00:20:39.044135 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:20:39.044148 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:20:39.044160 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:20:39.044174 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:20:39.044186 kernel: .... node #0, CPUs: #1
Nov 1 00:20:39.044197 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:20:39.044209 kernel: smpboot: Max logical packages: 1
Nov 1 00:20:39.044221 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS)
Nov 1 00:20:39.044233 kernel: devtmpfs: initialized
Nov 1 00:20:39.044245 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:20:39.044257 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:20:39.044269 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:20:39.044283 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:20:39.044295 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:20:39.044307 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:20:39.044319 kernel: audit: type=2000 audit(1761956437.594:1): state=initialized audit_enabled=0 res=1
Nov 1 00:20:39.044331 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:20:39.044343 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:20:39.044355 kernel: cpuidle: using governor menu
Nov 1 00:20:39.044368 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:20:39.044381 kernel: dca service started, version 1.12.1
Nov 1 00:20:39.044396 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:20:39.044408 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:20:39.044420 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:20:39.044432 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:20:39.044444 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:20:39.044456 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:20:39.044468 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:20:39.044480 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:20:39.044491 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:20:39.044505 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:20:39.044517 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:20:39.044529 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:20:39.044541 kernel: ACPI: Interpreter enabled
Nov 1 00:20:39.044553 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:20:39.044564 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:20:39.044576 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:20:39.044588 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:20:39.044600 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:20:39.044614 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:20:39.044831 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:20:39.045017 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:20:39.045143 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:20:39.045160 kernel: PCI host bridge to bus 0000:00
Nov 1 00:20:39.045279 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:20:39.045388 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:20:39.045501 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:20:39.045606 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Nov 1 00:20:39.048169 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:20:39.048290 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 1 00:20:39.048395 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:20:39.048540 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:20:39.048830 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Nov 1 00:20:39.049016 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Nov 1 00:20:39.049204 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Nov 1 00:20:39.049369 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Nov 1 00:20:39.049497 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Nov 1 00:20:39.049752 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:20:39.049955 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 1 00:20:39.050106 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Nov 1 00:20:39.050246 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 1 00:20:39.050402 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Nov 1 00:20:39.050595 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 1 00:20:39.050773 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Nov 1 00:20:39.050913 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 1 00:20:39.051082 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Nov 1 00:20:39.051231 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 1 00:20:39.051397 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Nov 1 00:20:39.051560 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 1 00:20:39.051773 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Nov 1 00:20:39.051948 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 1 00:20:39.052126 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Nov 1 00:20:39.052303 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 1 00:20:39.052451 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Nov 1 00:20:39.052585 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Nov 1 00:20:39.052763 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Nov 1 00:20:39.052955 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:20:39.053129 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:20:39.053300 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:20:39.053433 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Nov 1 00:20:39.053555 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Nov 1 00:20:39.053858 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:20:39.054022 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 00:20:39.054202 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Nov 1 00:20:39.054343 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Nov 1 00:20:39.054471 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 1 00:20:39.054598 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Nov 1 00:20:39.054864 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 1 00:20:39.054997 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 00:20:39.055158 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 1 00:20:39.055343 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 1 00:20:39.055485 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Nov 1 00:20:39.055609 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 1 00:20:39.055812 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 00:20:39.055961 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 00:20:39.056148 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Nov 1 00:20:39.056295 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Nov 1 00:20:39.056444 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Nov 1 00:20:39.056621 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 1 00:20:39.056840 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 00:20:39.057010 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 00:20:39.057167 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Nov 1 00:20:39.057299 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 1 00:20:39.057473 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 1 00:20:39.057649 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 00:20:39.057921 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 00:20:39.058110 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 1 00:20:39.058245 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Nov 1 00:20:39.058370 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Nov 1 00:20:39.058548 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 1 00:20:39.058762 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 00:20:39.058929 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 00:20:39.059118 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Nov 1 00:20:39.059282 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Nov 1 00:20:39.059417 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Nov 1 00:20:39.059595 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 1 00:20:39.059815 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 00:20:39.060002 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 00:20:39.060027 kernel: acpiphp: Slot [0] registered
Nov 1 00:20:39.060169 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Nov 1 00:20:39.060307 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Nov 1 00:20:39.060435 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Nov 1 00:20:39.060562 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Nov 1 00:20:39.060772 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 1 00:20:39.060911 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 00:20:39.061068 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 00:20:39.061094 kernel: acpiphp: Slot [0-2] registered
Nov 1 00:20:39.061255 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 1 00:20:39.061382 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 1 00:20:39.061503 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 00:20:39.061520 kernel: acpiphp: Slot [0-3] registered
Nov 1 00:20:39.061668 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 1 00:20:39.061922 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 00:20:39.062094 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 00:20:39.062113 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:20:39.062127 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:20:39.062145 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:20:39.062157 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:20:39.062169 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:20:39.062181 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:20:39.062194 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:20:39.062206 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:20:39.062218 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:20:39.062230 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:20:39.062242 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:20:39.062257 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:20:39.062269 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:20:39.062281 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:20:39.062293 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:20:39.062306 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:20:39.062318 kernel: iommu: Default domain type: Translated
Nov 1 00:20:39.062330 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:20:39.062343 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:20:39.062355 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:20:39.062370 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:20:39.062382 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Nov 1 00:20:39.062517 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:20:39.062723 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:20:39.062867 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:20:39.062893 kernel: vgaarb: loaded
Nov 1 00:20:39.062912 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:20:39.062931 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:20:39.062955 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:20:39.062974 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:20:39.062992 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:20:39.063011 kernel: pnp: PnP ACPI init
Nov 1 00:20:39.063176 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:20:39.063198 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 00:20:39.063211 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:20:39.063224 kernel: NET: Registered PF_INET protocol family
Nov 1 00:20:39.063236 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:20:39.063253 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 00:20:39.063265 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:20:39.063277 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:20:39.063290 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 00:20:39.063302 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 00:20:39.063314 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:20:39.063327 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:20:39.063339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:20:39.063353 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:20:39.063489 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 1 00:20:39.063620 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 1 00:20:39.063866 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 1 00:20:39.064014 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Nov 1 00:20:39.064186 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Nov 1 00:20:39.064315 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Nov 1 00:20:39.064445 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 1 00:20:39.064578 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 00:20:39.064824 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 1 00:20:39.064965 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 1 00:20:39.065137 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 00:20:39.065272 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 00:20:39.065451 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 1 00:20:39.065583 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 00:20:39.065753 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 00:20:39.065886 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 1 00:20:39.066054 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 00:20:39.066203 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 00:20:39.066377 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 1 00:20:39.066508 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 00:20:39.066633 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 00:20:39.066832 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 1 00:20:39.067029 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 00:20:39.067219 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 00:20:39.067404 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 1 00:20:39.067534 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Nov 1 00:20:39.067697 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 00:20:39.067897 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 00:20:39.068055 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 1 00:20:39.068227 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Nov 1 00:20:39.068359 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 1 00:20:39.068486 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 00:20:39.068646 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 1 00:20:39.068855 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Nov 1 00:20:39.069039 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 00:20:39.069191 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 00:20:39.069310 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:20:39.069422 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:20:39.069531 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:20:39.069814 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Nov 1 00:20:39.069982 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:20:39.070125 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 1 00:20:39.070261 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 1 00:20:39.070379 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 1 00:20:39.070520 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 1 00:20:39.070750 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 00:20:39.070904 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 1 00:20:39.071020 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 00:20:39.071160 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 1 00:20:39.071328 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 00:20:39.071507 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 1 00:20:39.071643 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 00:20:39.071834 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 1 00:20:39.071951 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 00:20:39.072122 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Nov 1 00:20:39.072244 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 1 00:20:39.072366 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 00:20:39.072538 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Nov 1 00:20:39.072660 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 1 00:20:39.072817 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 00:20:39.072938 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Nov 1 00:20:39.073060 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 1 00:20:39.073173 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 00:20:39.073191 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:20:39.073208 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:20:39.073227 kernel: Initialise system trusted keyrings
Nov 1 00:20:39.073247 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 1 00:20:39.073266 kernel: Key type asymmetric registered
Nov 1 00:20:39.073284 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:20:39.073309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:20:39.073328 kernel: io scheduler mq-deadline registered
Nov 1 00:20:39.073347 kernel: io scheduler kyber registered
Nov 1 00:20:39.073361 kernel: io scheduler bfq registered
Nov 1 00:20:39.073505 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 1 00:20:39.073734 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 1 00:20:39.073887 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 1 00:20:39.074024 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 1 00:20:39.074149 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 1 00:20:39.074353 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 1 00:20:39.074499 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 1 00:20:39.074574 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 1 00:20:39.074664 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 1 00:20:39.074797 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 1 00:20:39.074875 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 1 00:20:39.074945 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 1 00:20:39.075015 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 1 00:20:39.075090 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 1 00:20:39.075159 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 1 00:20:39.075229 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 1 00:20:39.075239 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:20:39.075306 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 1 00:20:39.075375 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 1 00:20:39.075385 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:20:39.075393 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 1 00:20:39.075403 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:20:39.075411 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:20:39.075419 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:20:39.075426 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:20:39.075434 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:20:39.075533 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 1 00:20:39.075550 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:20:39.075636 kernel: rtc_cmos 00:03: registered as rtc0
Nov 1 00:20:39.075781 kernel: rtc_cmos 00:03: setting system
clock to 2025-11-01T00:20:38 UTC (1761956438) Nov 1 00:20:39.075869 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 1 00:20:39.075883 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 1 00:20:39.075895 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:20:39.075907 kernel: Segment Routing with IPv6 Nov 1 00:20:39.075918 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:20:39.075929 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:20:39.075941 kernel: Key type dns_resolver registered Nov 1 00:20:39.075952 kernel: IPI shorthand broadcast: enabled Nov 1 00:20:39.075966 kernel: sched_clock: Marking stable (1368012314, 144035661)->(1525417292, -13369317) Nov 1 00:20:39.075976 kernel: registered taskstats version 1 Nov 1 00:20:39.075986 kernel: Loading compiled-in X.509 certificates Nov 1 00:20:39.075994 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:20:39.076001 kernel: Key type .fscrypt registered Nov 1 00:20:39.076008 kernel: Key type fscrypt-provisioning registered Nov 1 00:20:39.076015 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 00:20:39.076023 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:20:39.076030 kernel: ima: No architecture policies found Nov 1 00:20:39.076039 kernel: clk: Disabling unused clocks Nov 1 00:20:39.076047 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:20:39.076054 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:20:39.076062 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:20:39.076069 kernel: Run /init as init process Nov 1 00:20:39.076076 kernel: with arguments: Nov 1 00:20:39.076086 kernel: /init Nov 1 00:20:39.076093 kernel: with environment: Nov 1 00:20:39.076100 kernel: HOME=/ Nov 1 00:20:39.076108 kernel: TERM=linux Nov 1 00:20:39.076118 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:20:39.076128 systemd[1]: Detected virtualization kvm. Nov 1 00:20:39.076138 systemd[1]: Detected architecture x86-64. Nov 1 00:20:39.076145 systemd[1]: Running in initrd. Nov 1 00:20:39.076153 systemd[1]: No hostname configured, using default hostname. Nov 1 00:20:39.076161 systemd[1]: Hostname set to . Nov 1 00:20:39.076170 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:20:39.076178 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:20:39.076185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:20:39.076193 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:20:39.076202 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Nov 1 00:20:39.076210 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:20:39.076218 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:20:39.076226 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:20:39.076237 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:20:39.076244 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:20:39.076252 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:20:39.076260 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:20:39.076268 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:20:39.076275 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:20:39.076283 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:20:39.076291 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:20:39.076300 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:20:39.076308 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:20:39.076316 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:20:39.076324 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:20:39.076331 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:20:39.076339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:20:39.076347 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:20:39.076355 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:20:39.076364 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Nov 1 00:20:39.076372 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:20:39.076380 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:20:39.076387 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:20:39.076395 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:20:39.076403 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:20:39.076411 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:39.076419 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:20:39.076447 systemd-journald[188]: Collecting audit messages is disabled. Nov 1 00:20:39.076469 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:20:39.076477 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:20:39.076487 systemd-journald[188]: Journal started Nov 1 00:20:39.076505 systemd-journald[188]: Runtime Journal (/run/log/journal/c4ab5d27b12c4965812643e402c78ae2) is 4.8M, max 38.4M, 33.6M free. Nov 1 00:20:39.043950 systemd-modules-load[189]: Inserted module 'overlay' Nov 1 00:20:39.105600 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:20:39.105622 kernel: Bridge firewalling registered Nov 1 00:20:39.105634 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:20:39.085298 systemd-modules-load[189]: Inserted module 'br_netfilter' Nov 1 00:20:39.106209 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:20:39.107118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:39.112815 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:39.114783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 1 00:20:39.117581 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:20:39.124906 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:20:39.126112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:20:39.133650 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:20:39.135955 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:39.137381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:20:39.147882 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:20:39.151118 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:20:39.154808 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:20:39.163255 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:20:39.165699 dracut-cmdline[218]: dracut-dracut-053 Nov 1 00:20:39.168334 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:39.183982 systemd-resolved[219]: Positive Trust Anchors: Nov 1 00:20:39.183994 systemd-resolved[219]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:20:39.184032 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:20:39.193808 systemd-resolved[219]: Defaulting to hostname 'linux'. Nov 1 00:20:39.194703 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:20:39.195438 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:20:39.229731 kernel: SCSI subsystem initialized Nov 1 00:20:39.238701 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:20:39.248709 kernel: iscsi: registered transport (tcp) Nov 1 00:20:39.267712 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:20:39.267779 kernel: QLogic iSCSI HBA Driver Nov 1 00:20:39.296169 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 00:20:39.301073 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:20:39.333737 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 1 00:20:39.333809 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:20:39.336350 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:20:39.374745 kernel: raid6: avx2x4 gen() 28362 MB/s Nov 1 00:20:39.391726 kernel: raid6: avx2x2 gen() 31937 MB/s Nov 1 00:20:39.408925 kernel: raid6: avx2x1 gen() 26118 MB/s Nov 1 00:20:39.408995 kernel: raid6: using algorithm avx2x2 gen() 31937 MB/s Nov 1 00:20:39.426979 kernel: raid6: .... xor() 20342 MB/s, rmw enabled Nov 1 00:20:39.427057 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:20:39.446744 kernel: xor: automatically using best checksumming function avx Nov 1 00:20:39.639709 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:20:39.653969 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:20:39.661939 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:20:39.674591 systemd-udevd[406]: Using default interface naming scheme 'v255'. Nov 1 00:20:39.678579 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:20:39.687998 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:20:39.706481 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Nov 1 00:20:39.738431 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:20:39.745926 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:20:39.790136 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:20:39.802656 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 00:20:39.831971 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:20:39.834490 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 1 00:20:39.836527 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:20:39.838348 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:20:39.844155 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:20:39.870914 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:20:39.887725 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:20:39.895740 kernel: scsi host0: Virtio SCSI HBA Nov 1 00:20:39.912880 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 1 00:20:39.937316 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:20:39.937389 kernel: AES CTR mode by8 optimization enabled Nov 1 00:20:39.950257 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:20:39.951013 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:39.952509 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:39.953044 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:20:39.953200 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:39.956555 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:39.965728 kernel: ACPI: bus type USB registered Nov 1 00:20:39.966990 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:39.969483 kernel: usbcore: registered new interface driver usbfs Nov 1 00:20:39.972693 kernel: libata version 3.00 loaded. 
Nov 1 00:20:39.977523 kernel: usbcore: registered new interface driver hub Nov 1 00:20:39.978711 kernel: usbcore: registered new device driver usb Nov 1 00:20:39.993691 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:20:39.993903 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:20:39.999702 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 00:20:39.999830 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:20:40.004690 kernel: scsi host1: ahci Nov 1 00:20:40.005710 kernel: scsi host2: ahci Nov 1 00:20:40.007728 kernel: scsi host3: ahci Nov 1 00:20:40.008694 kernel: scsi host4: ahci Nov 1 00:20:40.012692 kernel: scsi host5: ahci Nov 1 00:20:40.019695 kernel: scsi host6: ahci Nov 1 00:20:40.019844 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 Nov 1 00:20:40.019854 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 Nov 1 00:20:40.019863 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 Nov 1 00:20:40.019872 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 Nov 1 00:20:40.019880 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 Nov 1 00:20:40.019889 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 Nov 1 00:20:40.050025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:40.058867 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:40.072574 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 1 00:20:40.328695 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 1 00:20:40.328776 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:40.331207 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 1 00:20:40.331230 kernel: ata1.00: applying bridge limits Nov 1 00:20:40.331674 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:40.335107 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:40.335698 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:40.341706 kernel: ata1.00: configured for UDMA/100 Nov 1 00:20:40.341751 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 00:20:40.343409 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 1 00:20:40.377239 kernel: sd 0:0:0:0: Power-on or device reset occurred Nov 1 00:20:40.377730 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Nov 1 00:20:40.383847 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 00:20:40.384110 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 1 00:20:40.389376 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 1 00:20:40.389926 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:20:40.390166 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 1 00:20:40.407045 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:20:40.407136 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 1 00:20:40.407415 kernel: GPT:17805311 != 80003071 Nov 1 00:20:40.407435 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:20:40.407455 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 1 00:20:40.407610 kernel: GPT:17805311 != 80003071 Nov 1 00:20:40.407624 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 1 00:20:40.407650 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 1 00:20:40.407845 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:40.407859 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 1 00:20:40.409878 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 00:20:40.422736 kernel: hub 1-0:1.0: USB hub found Nov 1 00:20:40.429208 kernel: hub 1-0:1.0: 4 ports detected Nov 1 00:20:40.429401 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 1 00:20:40.430980 kernel: hub 2-0:1.0: USB hub found Nov 1 00:20:40.432224 kernel: hub 2-0:1.0: 4 ports detected Nov 1 00:20:40.443042 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 1 00:20:40.443307 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:20:40.456769 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Nov 1 00:20:40.480619 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (454) Nov 1 00:20:40.482243 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 1 00:20:40.484842 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (462) Nov 1 00:20:40.498017 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 1 00:20:40.504573 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 1 00:20:40.510128 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 1 00:20:40.511449 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 1 00:20:40.517851 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:20:40.523130 disk-uuid[573]: Primary Header is updated. Nov 1 00:20:40.523130 disk-uuid[573]: Secondary Entries is updated. 
Nov 1 00:20:40.523130 disk-uuid[573]: Secondary Header is updated. Nov 1 00:20:40.534779 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:40.542701 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:40.550712 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:40.667911 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 1 00:20:40.815745 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:20:40.825659 kernel: usbcore: registered new interface driver usbhid Nov 1 00:20:40.825784 kernel: usbhid: USB HID core driver Nov 1 00:20:40.836700 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 1 00:20:40.836749 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 1 00:20:41.556749 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:41.557903 disk-uuid[574]: The operation has completed successfully. Nov 1 00:20:41.636492 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:20:41.636632 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:20:41.671092 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 00:20:41.674974 sh[595]: Success Nov 1 00:20:41.691902 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Nov 1 00:20:41.769584 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:20:41.783851 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:20:41.786528 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 1 00:20:41.830299 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:20:41.830385 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:41.833843 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:20:41.839557 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:20:41.839606 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:20:41.854727 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 1 00:20:41.857761 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 00:20:41.859659 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:20:41.866897 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:20:41.869429 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:20:41.898744 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:41.898833 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:41.902002 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:41.913353 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:20:41.913403 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:41.931006 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:41.930585 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:20:41.938713 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:20:41.944862 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 1 00:20:42.008059 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:20:42.019844 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:20:42.050369 ignition[713]: Ignition 2.19.0 Nov 1 00:20:42.050381 ignition[713]: Stage: fetch-offline Nov 1 00:20:42.050415 ignition[713]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:42.050423 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 1 00:20:42.050512 ignition[713]: parsed url from cmdline: "" Nov 1 00:20:42.054644 systemd-networkd[776]: lo: Link UP Nov 1 00:20:42.050515 ignition[713]: no config URL provided Nov 1 00:20:42.054648 systemd-networkd[776]: lo: Gained carrier Nov 1 00:20:42.050519 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:42.055380 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:20:42.050526 ignition[713]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:42.056343 systemd-networkd[776]: Enumeration completed Nov 1 00:20:42.050530 ignition[713]: failed to fetch config: resource requires networking Nov 1 00:20:42.056771 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:42.050738 ignition[713]: Ignition finished successfully Nov 1 00:20:42.056774 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:20:42.056871 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:20:42.057398 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:42.057402 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 00:20:42.057928 systemd-networkd[776]: eth0: Link UP Nov 1 00:20:42.057931 systemd-networkd[776]: eth0: Gained carrier Nov 1 00:20:42.057936 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:42.059828 systemd[1]: Reached target network.target - Network. Nov 1 00:20:42.062875 systemd-networkd[776]: eth1: Link UP Nov 1 00:20:42.062879 systemd-networkd[776]: eth1: Gained carrier Nov 1 00:20:42.062884 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:42.070835 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 1 00:20:42.083204 ignition[783]: Ignition 2.19.0 Nov 1 00:20:42.083216 ignition[783]: Stage: fetch Nov 1 00:20:42.083368 ignition[783]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:42.083378 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 1 00:20:42.083463 ignition[783]: parsed url from cmdline: "" Nov 1 00:20:42.083467 ignition[783]: no config URL provided Nov 1 00:20:42.083471 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:42.083478 ignition[783]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:42.083494 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Nov 1 00:20:42.083627 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 1 00:20:42.111761 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 1 00:20:42.122796 systemd-networkd[776]: eth0: DHCPv4 address 95.217.181.13/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 1 00:20:42.284423 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Nov 1 00:20:42.295046 ignition[783]: GET result: OK Nov 1 00:20:42.295197 
ignition[783]: parsing config with SHA512: 6ca00fa70dd625948f1a7b5368f9aeb947169ac53225e98450115b97faf60f08f170a47c3f9e58356403fe0177c70e588fdcaac6f33b7b46c278cac667aaef36 Nov 1 00:20:42.302306 unknown[783]: fetched base config from "system" Nov 1 00:20:42.302441 unknown[783]: fetched base config from "system" Nov 1 00:20:42.302459 unknown[783]: fetched user config from "hetzner" Nov 1 00:20:42.306377 ignition[783]: fetch: fetch complete Nov 1 00:20:42.306387 ignition[783]: fetch: fetch passed Nov 1 00:20:42.306476 ignition[783]: Ignition finished successfully Nov 1 00:20:42.311414 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 1 00:20:42.320962 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:20:42.346412 ignition[791]: Ignition 2.19.0 Nov 1 00:20:42.346425 ignition[791]: Stage: kargs Nov 1 00:20:42.346660 ignition[791]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:42.349556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:20:42.346718 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 1 00:20:42.347898 ignition[791]: kargs: kargs passed Nov 1 00:20:42.347952 ignition[791]: Ignition finished successfully Nov 1 00:20:42.358900 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:20:42.379891 ignition[798]: Ignition 2.19.0 Nov 1 00:20:42.379909 ignition[798]: Stage: disks Nov 1 00:20:42.387212 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:20:42.380235 ignition[798]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:42.389268 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:20:42.380253 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 1 00:20:42.390636 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
Nov 1 00:20:42.382068 ignition[798]: disks: disks passed
Nov 1 00:20:42.392475 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:20:42.382147 ignition[798]: Ignition finished successfully
Nov 1 00:20:42.394569 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:20:42.396740 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:20:42.404815 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:20:42.421712 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 1 00:20:42.426843 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:20:42.432836 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:20:42.534704 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:20:42.535967 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:20:42.537926 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:20:42.544825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:20:42.547455 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:20:42.551017 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 1 00:20:42.553950 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:20:42.553975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:20:42.568478 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (814)
Nov 1 00:20:42.568504 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:20:42.568514 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:20:42.568523 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:20:42.559886 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:20:42.574863 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:20:42.584308 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:20:42.584342 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:20:42.587012 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:20:42.639170 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:20:42.640059 coreos-metadata[816]: Nov 01 00:20:42.638 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Nov 1 00:20:42.640961 coreos-metadata[816]: Nov 01 00:20:42.640 INFO Fetch successful
Nov 1 00:20:42.640961 coreos-metadata[816]: Nov 01 00:20:42.640 INFO wrote hostname ci-4081-3-6-n-a2a464dc28 to /sysroot/etc/hostname
Nov 1 00:20:42.644786 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 00:20:42.646710 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:20:42.650480 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:20:42.653238 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:20:42.727311 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:20:42.733765 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:20:42.738846 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:20:42.740719 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:20:42.776641 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:20:42.779846 ignition[930]: INFO : Ignition 2.19.0
Nov 1 00:20:42.779846 ignition[930]: INFO : Stage: mount
Nov 1 00:20:42.780861 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:20:42.780861 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:20:42.782036 ignition[930]: INFO : mount: mount passed
Nov 1 00:20:42.782036 ignition[930]: INFO : Ignition finished successfully
Nov 1 00:20:42.782616 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:20:42.788825 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:20:42.826235 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:20:42.833018 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:20:42.855727 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Nov 1 00:20:42.858719 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:20:42.858760 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:20:42.862774 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:20:42.869892 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:20:42.869949 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:20:42.875256 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:20:42.894699 ignition[960]: INFO : Ignition 2.19.0
Nov 1 00:20:42.895420 ignition[960]: INFO : Stage: files
Nov 1 00:20:42.896021 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:20:42.896626 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:20:42.898043 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:20:42.900136 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:20:42.900841 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:20:42.906492 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:20:42.907225 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:20:42.908131 unknown[960]: wrote ssh authorized keys file for user: core
Nov 1 00:20:42.908877 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:20:42.910363 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:20:42.911302 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:20:43.111004 systemd-networkd[776]: eth1: Gained IPv6LL
Nov 1 00:20:43.167554 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:20:43.468982 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:20:43.468982 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:20:43.473534 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 00:20:43.878105 systemd-networkd[776]: eth0: Gained IPv6LL
Nov 1 00:20:43.914207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 00:20:44.186467 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:20:44.186467 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:20:44.192152 ignition[960]: INFO : files: files passed
Nov 1 00:20:44.192152 ignition[960]: INFO : Ignition finished successfully
Nov 1 00:20:44.192042 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 00:20:44.200802 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 00:20:44.203906 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 00:20:44.209779 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:20:44.224809 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:20:44.224809 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:20:44.209894 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 00:20:44.228416 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:20:44.220084 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:20:44.221351 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 00:20:44.230760 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 00:20:44.264441 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:20:44.264536 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 00:20:44.266037 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 00:20:44.267545 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 00:20:44.268992 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 00:20:44.277979 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 00:20:44.288760 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:20:44.295887 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 00:20:44.307925 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:20:44.310132 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:20:44.311180 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 00:20:44.313041 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:20:44.313258 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:20:44.315250 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 00:20:44.316395 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 00:20:44.318135 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 00:20:44.319601 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:20:44.321084 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 00:20:44.322965 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 00:20:44.324501 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:20:44.326222 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 00:20:44.328031 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 00:20:44.329879 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 00:20:44.331368 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:20:44.331516 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:20:44.333305 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:20:44.334311 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:20:44.335709 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 00:20:44.336562 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:20:44.338242 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:20:44.338375 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:20:44.340889 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:20:44.341038 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:20:44.342069 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:20:44.342237 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 00:20:44.343632 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 00:20:44.343800 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 00:20:44.352219 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 00:20:44.355939 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 00:20:44.356648 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:20:44.356874 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:20:44.360220 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:20:44.360474 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:20:44.368004 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:20:44.368117 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 00:20:44.375189 ignition[1013]: INFO : Ignition 2.19.0
Nov 1 00:20:44.375189 ignition[1013]: INFO : Stage: umount
Nov 1 00:20:44.378216 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:20:44.378216 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:20:44.378216 ignition[1013]: INFO : umount: umount passed
Nov 1 00:20:44.378216 ignition[1013]: INFO : Ignition finished successfully
Nov 1 00:20:44.378903 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:20:44.379007 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 00:20:44.380073 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:20:44.380125 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 00:20:44.382752 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:20:44.382784 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 00:20:44.398087 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:20:44.398125 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 1 00:20:44.399330 systemd[1]: Stopped target network.target - Network.
Nov 1 00:20:44.400553 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:20:44.400590 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:20:44.401998 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 00:20:44.403204 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:20:44.403238 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:20:44.404516 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 00:20:44.414303 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 00:20:44.415271 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:20:44.415310 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:20:44.416531 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:20:44.416574 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:20:44.418376 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:20:44.418426 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 00:20:44.419497 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 00:20:44.419541 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 00:20:44.420716 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 00:20:44.421872 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 00:20:44.426412 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:20:44.426705 systemd-networkd[776]: eth0: DHCPv6 lease lost
Nov 1 00:20:44.427111 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:20:44.427219 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 00:20:44.433734 systemd-networkd[776]: eth1: DHCPv6 lease lost
Nov 1 00:20:44.435857 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 00:20:44.435913 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:20:44.440059 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:20:44.440189 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 00:20:44.441832 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:20:44.441942 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 00:20:44.444201 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:20:44.444260 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:20:44.445569 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:20:44.445632 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 00:20:44.451815 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 00:20:44.453019 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:20:44.453087 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:20:44.454905 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:20:44.454961 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:20:44.456787 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:20:44.456844 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:20:44.458291 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:20:44.466718 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:20:44.466871 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 00:20:44.468474 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:20:44.468631 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:20:44.470504 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:20:44.470572 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:20:44.472020 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:20:44.472063 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:20:44.473342 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:20:44.473398 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:20:44.475167 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:20:44.475220 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:20:44.476425 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:20:44.476478 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:20:44.483806 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 00:20:44.485048 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:20:44.485098 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:20:44.486850 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 00:20:44.486883 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:20:44.488677 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:20:44.488719 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:20:44.489704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:20:44.489735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:20:44.491224 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:20:44.491326 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 00:20:44.493000 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 00:20:44.499780 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 00:20:44.506893 systemd[1]: Switching root.
Nov 1 00:20:44.557716 systemd-journald[188]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:20:44.557806 systemd-journald[188]: Journal stopped
Nov 1 00:20:45.556657 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:20:45.556741 kernel: SELinux: policy capability open_perms=1
Nov 1 00:20:45.556753 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:20:45.556762 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:20:45.556773 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:20:45.556785 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:20:45.556794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:20:45.556803 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:20:45.556814 kernel: audit: type=1403 audit(1761956444.735:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:20:45.556825 systemd[1]: Successfully loaded SELinux policy in 56.968ms.
Nov 1 00:20:45.556841 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.288ms.
Nov 1 00:20:45.556852 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:20:45.556861 systemd[1]: Detected virtualization kvm.
Nov 1 00:20:45.556871 systemd[1]: Detected architecture x86-64.
Nov 1 00:20:45.556880 systemd[1]: Detected first boot.
Nov 1 00:20:45.556889 systemd[1]: Hostname set to .
Nov 1 00:20:45.556899 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:20:45.556908 zram_generator::config[1055]: No configuration found.
Nov 1 00:20:45.556924 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:20:45.556935 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:20:45.556947 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 00:20:45.556957 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:20:45.556969 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 00:20:45.556979 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 00:20:45.556988 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 00:20:45.556997 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 00:20:45.557007 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 00:20:45.557018 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 00:20:45.557028 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 00:20:45.557037 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 00:20:45.557046 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:20:45.557056 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:20:45.557065 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 00:20:45.557075 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 00:20:45.557084 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 00:20:45.557095 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:20:45.557105 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 00:20:45.557114 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:20:45.557124 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 00:20:45.557133 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 00:20:45.557143 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:20:45.557165 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 00:20:45.557176 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:20:45.557186 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:20:45.557196 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:20:45.557205 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:20:45.557215 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 00:20:45.557224 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 00:20:45.557233 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:20:45.557243 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:20:45.557252 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:20:45.557263 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 00:20:45.557273 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 00:20:45.557282 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 00:20:45.557292 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 00:20:45.557302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:45.557311 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 00:20:45.557321 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 00:20:45.557334 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 00:20:45.557346 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:20:45.557356 systemd[1]: Reached target machines.target - Containers.
Nov 1 00:20:45.557366 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 00:20:45.557376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:20:45.557386 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:20:45.557395 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 00:20:45.557406 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:20:45.557415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:20:45.557425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:20:45.557434 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 00:20:45.557444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:20:45.557453 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:20:45.557463 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 00:20:45.557472 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 00:20:45.557483 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 00:20:45.557493 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 00:20:45.557503 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:20:45.557512 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:20:45.557521 kernel: ACPI: bus type drm_connector registered
Nov 1 00:20:45.557530 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 00:20:45.557540 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 00:20:45.557549 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:20:45.557559 kernel: fuse: init (API version 7.39)
Nov 1 00:20:45.557570 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 00:20:45.557580 kernel: loop: module loaded
Nov 1 00:20:45.557589 systemd[1]: Stopped verity-setup.service.
Nov 1 00:20:45.557599 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:45.557620 systemd-journald[1138]: Collecting audit messages is disabled.
Nov 1 00:20:45.557641 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 00:20:45.557653 systemd-journald[1138]: Journal started
Nov 1 00:20:45.557696 systemd-journald[1138]: Runtime Journal (/run/log/journal/c4ab5d27b12c4965812643e402c78ae2) is 4.8M, max 38.4M, 33.6M free.
Nov 1 00:20:45.216546 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:20:45.234584 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 1 00:20:45.234984 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 00:20:45.561675 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:20:45.563106 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 00:20:45.563706 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 00:20:45.564235 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 00:20:45.564837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 00:20:45.565398 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 00:20:45.566087 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 00:20:45.566808 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:20:45.567571 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:20:45.567761 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 00:20:45.568442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:20:45.568542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:20:45.569196 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:20:45.569291 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:20:45.570043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:20:45.570189 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:20:45.571028 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:20:45.571175 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 00:20:45.571944 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:20:45.572091 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:20:45.572843 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:20:45.573509 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 00:20:45.574278 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 00:20:45.581087 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 00:20:45.587759 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 00:20:45.590594 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 00:20:45.591097 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:20:45.591119 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:20:45.592385 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 00:20:45.595756 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 00:20:45.598190 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 00:20:45.599789 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:20:45.603116 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 00:20:45.608122 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 00:20:45.608637 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:20:45.611371 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 00:20:45.611917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:20:45.615897 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:20:45.617747 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 00:20:45.619773 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:20:45.623621 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 00:20:45.634822 systemd-journald[1138]: Time spent on flushing to /var/log/journal/c4ab5d27b12c4965812643e402c78ae2 is 74.410ms for 1130 entries.
Nov 1 00:20:45.634822 systemd-journald[1138]: System Journal (/var/log/journal/c4ab5d27b12c4965812643e402c78ae2) is 8.0M, max 584.8M, 576.8M free.
Nov 1 00:20:45.751244 systemd-journald[1138]: Received client request to flush runtime journal.
Nov 1 00:20:45.751284 kernel: loop0: detected capacity change from 0 to 8
Nov 1 00:20:45.751303 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:20:45.751317 kernel: loop1: detected capacity change from 0 to 224512
Nov 1 00:20:45.637890 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 00:20:45.638812 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 00:20:45.644904 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 00:20:45.645827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 00:20:45.659052 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 00:20:45.661704 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:20:45.730913 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:20:45.737122 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Nov 1 00:20:45.737135 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Nov 1 00:20:45.743871 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 00:20:45.746466 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:20:45.748190 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 00:20:45.759726 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 00:20:45.768069 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:20:45.768851 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 00:20:45.771796 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 00:20:45.788756 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 00:20:45.792772 kernel: loop2: detected capacity change from 0 to 142488
Nov 1 00:20:45.796821 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:20:45.822326 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Nov 1 00:20:45.822643 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Nov 1 00:20:45.826401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:20:45.847713 kernel: loop3: detected capacity change from 0 to 140768
Nov 1 00:20:45.898702 kernel: loop4: detected capacity change from 0 to 8
Nov 1 00:20:45.904861 kernel: loop5: detected capacity change from 0 to 224512
Nov 1 00:20:45.927700 kernel: loop6: detected capacity change from 0 to 142488
Nov 1 00:20:45.951817 kernel: loop7: detected capacity change from 0 to 140768
Nov 1 00:20:45.977240 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Nov 1 00:20:45.977720 (sd-merge)[1204]: Merged extensions into '/usr'.
Nov 1 00:20:45.984045 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 00:20:45.984212 systemd[1]: Reloading...
Nov 1 00:20:46.051717 zram_generator::config[1227]: No configuration found.
Nov 1 00:20:46.147132 ldconfig[1170]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:20:46.186790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:20:46.237062 systemd[1]: Reloading finished in 251 ms.
Nov 1 00:20:46.262114 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 00:20:46.262887 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 00:20:46.271905 systemd[1]: Starting ensure-sysext.service...
Nov 1 00:20:46.275384 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:20:46.278019 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 00:20:46.281952 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:20:46.286583 systemd[1]: Reloading requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)...
Nov 1 00:20:46.286594 systemd[1]: Reloading...
Nov 1 00:20:46.292263 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 00:20:46.292638 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 00:20:46.293705 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 00:20:46.293989 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Nov 1 00:20:46.294045 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Nov 1 00:20:46.297496 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:20:46.297502 systemd-tmpfiles[1274]: Skipping /boot
Nov 1 00:20:46.308123 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:20:46.309823 systemd-tmpfiles[1274]: Skipping /boot
Nov 1 00:20:46.322416 systemd-udevd[1276]: Using default interface naming scheme 'v255'.
Nov 1 00:20:46.350760 zram_generator::config[1305]: No configuration found.
Nov 1 00:20:46.479700 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:20:46.494715 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1308)
Nov 1 00:20:46.503692 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 1 00:20:46.516623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:20:46.524696 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:20:46.586699 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 1 00:20:46.591932 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 1 00:20:46.592160 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 1 00:20:46.595696 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 1 00:20:46.597421 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 1 00:20:46.597479 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 1 00:20:46.598296 systemd[1]: Reloading finished in 311 ms.
Nov 1 00:20:46.610712 kernel: EDAC MC: Ver: 3.0.0
Nov 1 00:20:46.619131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:20:46.619971 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:20:46.650703 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 1 00:20:46.650748 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 1 00:20:46.655702 kernel: Console: switching to colour dummy device 80x25
Nov 1 00:20:46.655735 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 1 00:20:46.655749 kernel: [drm] features: -context_init
Nov 1 00:20:46.657690 kernel: [drm] number of scanouts: 1
Nov 1 00:20:46.657735 kernel: [drm] number of cap sets: 0
Nov 1 00:20:46.660694 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Nov 1 00:20:46.660235 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Nov 1 00:20:46.672530 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 1 00:20:46.672614 kernel: Console: switching to colour frame buffer device 160x50
Nov 1 00:20:46.672293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:46.675711 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 1 00:20:46.681165 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 00:20:46.684104 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 00:20:46.685651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:20:46.688882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:20:46.690881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:20:46.694968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:20:46.695186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:20:46.697767 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 00:20:46.700512 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 00:20:46.705891 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:20:46.709200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:20:46.717855 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 00:20:46.719720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:46.726118 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:20:46.726263 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:20:46.727118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:20:46.727384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:20:46.728408 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:20:46.729461 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:20:46.735144 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 00:20:46.751844 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:46.752120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:20:46.760808 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:20:46.762845 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:20:46.769128 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:20:46.769279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:20:46.773408 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 00:20:46.776593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:20:46.777207 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:46.782154 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 00:20:46.782845 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 00:20:46.783400 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:20:46.783495 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:20:46.785073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:20:46.785452 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:20:46.789225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:20:46.789855 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:20:46.805792 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:46.806518 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:20:46.811348 augenrules[1426]: No rules
Nov 1 00:20:46.813142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:20:46.819878 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:20:46.831007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:20:46.838933 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:20:46.839420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:20:46.842983 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 00:20:46.843383 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:20:46.844269 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 00:20:46.848164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:20:46.848280 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:20:46.856577 systemd[1]: Finished ensure-sysext.service.
Nov 1 00:20:46.860140 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:20:46.862344 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:20:46.866802 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 00:20:46.867731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:20:46.870071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:20:46.871076 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 00:20:46.879963 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:20:46.880103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:20:46.887745 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 00:20:46.899234 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:20:46.899395 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:20:46.909334 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 00:20:46.911992 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:20:46.916833 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:20:46.916994 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:20:46.927077 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:20:46.933001 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 1 00:20:46.949146 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 1 00:20:46.975568 systemd-resolved[1393]: Positive Trust Anchors:
Nov 1 00:20:46.977983 systemd-resolved[1393]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:20:46.978015 systemd-resolved[1393]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:20:46.989990 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:20:46.983973 systemd-networkd[1391]: lo: Link UP
Nov 1 00:20:46.983976 systemd-networkd[1391]: lo: Gained carrier
Nov 1 00:20:46.985549 systemd-networkd[1391]: Enumeration completed
Nov 1 00:20:46.985735 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:20:46.994087 systemd-resolved[1393]: Using system hostname 'ci-4081-3-6-n-a2a464dc28'.
Nov 1 00:20:46.995565 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:20:46.995620 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:20:46.995805 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 00:20:46.997593 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:20:47.000466 systemd[1]: Reached target network.target - Network.
Nov 1 00:20:47.002103 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:20:47.003276 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:20:47.003329 systemd-networkd[1391]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:20:47.003901 systemd-networkd[1391]: eth0: Link UP
Nov 1 00:20:47.003952 systemd-networkd[1391]: eth0: Gained carrier
Nov 1 00:20:47.003999 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:20:47.007517 systemd-networkd[1391]: eth1: Link UP
Nov 1 00:20:47.007914 systemd-networkd[1391]: eth1: Gained carrier
Nov 1 00:20:47.007975 systemd-networkd[1391]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:20:47.011254 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:20:47.014882 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 00:20:47.015373 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 00:20:47.047722 systemd-networkd[1391]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 1 00:20:47.048885 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection.
Nov 1 00:20:47.049469 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 1 00:20:47.052027 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:20:47.052452 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:20:47.055012 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 00:20:47.057151 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 00:20:47.059401 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 00:20:47.061593 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 00:20:47.063426 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 00:20:47.065469 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:20:47.065566 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:20:47.067423 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:20:47.069865 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 00:20:47.070996 systemd-networkd[1391]: eth0: DHCPv4 address 95.217.181.13/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 1 00:20:47.072793 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection.
Nov 1 00:20:47.073320 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection.
Nov 1 00:20:47.075799 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 00:20:47.083634 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 00:20:47.089430 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 1 00:20:47.094271 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 00:20:47.097296 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:20:47.099968 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:20:47.102443 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 00:20:47.102607 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 00:20:47.106762 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 00:20:47.109562 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:20:47.116060 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 1 00:20:47.123523 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 00:20:47.126772 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 00:20:47.129925 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 00:20:47.134084 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 00:20:47.138853 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 00:20:47.147762 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 00:20:47.155932 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Nov 1 00:20:47.162723 coreos-metadata[1470]: Nov 01 00:20:47.161 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Nov 1 00:20:47.168858 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found loop4
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found loop5
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found loop6
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found loop7
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found sda
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found sda1
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found sda2
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found sda3
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found usr
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found sda4
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found sda6
Nov 1 00:20:47.172849 extend-filesystems[1475]: Found sda7
Nov 1 00:20:47.229236 extend-filesystems[1475]: Found sda9
Nov 1 00:20:47.229236 extend-filesystems[1475]: Checking size of /dev/sda9
Nov 1 00:20:47.229236 extend-filesystems[1475]: Resized partition /dev/sda9
Nov 1 00:20:47.250560 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Nov 1 00:20:47.250584 coreos-metadata[1470]: Nov 01 00:20:47.178 INFO Fetch successful
Nov 1 00:20:47.250584 coreos-metadata[1470]: Nov 01 00:20:47.178 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Nov 1 00:20:47.250584 coreos-metadata[1470]: Nov 01 00:20:47.178 INFO Fetch successful
Nov 1 00:20:47.174954 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 00:20:47.195270 dbus-daemon[1471]: [system] SELinux support is enabled
Nov 1 00:20:47.251576 jq[1474]: false
Nov 1 00:20:47.251925 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024)
Nov 1 00:20:47.188805 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 00:20:47.199628 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:20:47.203734 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 00:20:47.210592 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 00:20:47.220384 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 00:20:47.230352 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 00:20:47.240079 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 1 00:20:47.262036 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:20:47.262875 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 00:20:47.263103 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:20:47.263219 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 00:20:47.276141 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:20:47.276404 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 00:20:47.282402 jq[1497]: true
Nov 1 00:20:47.285573 update_engine[1492]: I20251101 00:20:47.282644 1492 main.cc:92] Flatcar Update Engine starting
Nov 1 00:20:47.289010 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:20:47.290832 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 00:20:47.293251 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:20:47.293269 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 00:20:47.310758 jq[1515]: true Nov 1 00:20:47.318338 update_engine[1492]: I20251101 00:20:47.315946 1492 update_check_scheduler.cc:74] Next update check in 6m35s Nov 1 00:20:47.321102 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:20:47.326394 tar[1504]: linux-amd64/LICENSE Nov 1 00:20:47.340290 tar[1504]: linux-amd64/helm Nov 1 00:20:47.334615 systemd-logind[1487]: New seat seat0. Nov 1 00:20:47.336372 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Nov 1 00:20:47.336385 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:20:47.336839 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:20:47.339340 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:20:47.346596 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1325) Nov 1 00:20:47.350083 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:20:47.451479 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 1 00:20:47.453656 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:20:47.469778 bash[1537]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:20:47.472949 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:20:47.490572 systemd[1]: Starting sshkeys.service... Nov 1 00:20:47.515943 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 00:20:47.524151 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 1 00:20:47.545232 coreos-metadata[1550]: Nov 01 00:20:47.545 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 1 00:20:47.547079 coreos-metadata[1550]: Nov 01 00:20:47.546 INFO Fetch successful Nov 1 00:20:47.551178 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:20:47.565646 unknown[1550]: wrote ssh authorized keys file for user: core Nov 1 00:20:47.574814 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 1 00:20:47.614974 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 1 00:20:47.614974 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 1 00:20:47.614974 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 1 00:20:47.624554 extend-filesystems[1475]: Resized filesystem in /dev/sda9 Nov 1 00:20:47.624554 extend-filesystems[1475]: Found sr0 Nov 1 00:20:47.619106 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:20:47.631718 update-ssh-keys[1556]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:20:47.619293 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:20:47.629921 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 00:20:47.635740 systemd[1]: Finished sshkeys.service. Nov 1 00:20:47.672955 containerd[1505]: time="2025-11-01T00:20:47.672871106Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:20:47.744711 containerd[1505]: time="2025-11-01T00:20:47.744632422Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:47.748407 containerd[1505]: time="2025-11-01T00:20:47.748371965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:47.748407 containerd[1505]: time="2025-11-01T00:20:47.748404475Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:20:47.748484 containerd[1505]: time="2025-11-01T00:20:47.748419073Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:20:47.748739 containerd[1505]: time="2025-11-01T00:20:47.748556400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:20:47.748739 containerd[1505]: time="2025-11-01T00:20:47.748574574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:47.748739 containerd[1505]: time="2025-11-01T00:20:47.748636030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:47.748739 containerd[1505]: time="2025-11-01T00:20:47.748647130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:47.748864 containerd[1505]: time="2025-11-01T00:20:47.748840403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:47.748864 containerd[1505]: time="2025-11-01T00:20:47.748862645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:20:47.748902 containerd[1505]: time="2025-11-01T00:20:47.748875318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:47.748902 containerd[1505]: time="2025-11-01T00:20:47.748885738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:47.749152 containerd[1505]: time="2025-11-01T00:20:47.748946251Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:47.749152 containerd[1505]: time="2025-11-01T00:20:47.749114868Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:20:47.749238 containerd[1505]: time="2025-11-01T00:20:47.749211740Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:20:47.749263 containerd[1505]: time="2025-11-01T00:20:47.749241035Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:20:47.749347 containerd[1505]: time="2025-11-01T00:20:47.749325794Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:20:47.749431 containerd[1505]: time="2025-11-01T00:20:47.749372350Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:20:47.756902 containerd[1505]: time="2025-11-01T00:20:47.756772057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:20:47.756902 containerd[1505]: time="2025-11-01T00:20:47.756832811Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Nov 1 00:20:47.756902 containerd[1505]: time="2025-11-01T00:20:47.756847879Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:20:47.756902 containerd[1505]: time="2025-11-01T00:20:47.756862747Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:20:47.756902 containerd[1505]: time="2025-11-01T00:20:47.756875520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:20:47.757006 containerd[1505]: time="2025-11-01T00:20:47.756985066Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757287483Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757448525Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757462612Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757473902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757485915Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757499401Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757511894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757525189Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757538283Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757550827Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757562038Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757572908Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757592585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759687 containerd[1505]: time="2025-11-01T00:20:47.757613615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757624816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757636167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757647067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757658579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757691882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757704294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757715746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757730363Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757740393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757750692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757761963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757776160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757793813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757803671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.759951 containerd[1505]: time="2025-11-01T00:20:47.757814291Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:20:47.760175 containerd[1505]: time="2025-11-01T00:20:47.757851851Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:20:47.760175 containerd[1505]: time="2025-11-01T00:20:47.757867460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:20:47.760175 containerd[1505]: time="2025-11-01T00:20:47.757876929Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:20:47.760175 containerd[1505]: time="2025-11-01T00:20:47.757890734Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:20:47.760175 containerd[1505]: time="2025-11-01T00:20:47.757899761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:20:47.760175 containerd[1505]: time="2025-11-01T00:20:47.757911142Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:20:47.760175 containerd[1505]: time="2025-11-01T00:20:47.757919949Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:20:47.760175 containerd[1505]: time="2025-11-01T00:20:47.757928555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 1 00:20:47.760303 containerd[1505]: time="2025-11-01T00:20:47.758186068Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:20:47.760303 containerd[1505]: time="2025-11-01T00:20:47.758236743Z" level=info msg="Connect containerd service" Nov 1 00:20:47.760303 containerd[1505]: time="2025-11-01T00:20:47.758269494Z" level=info msg="using legacy CRI server" Nov 1 00:20:47.760303 containerd[1505]: time="2025-11-01T00:20:47.758274815Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:20:47.760303 containerd[1505]: time="2025-11-01T00:20:47.758359333Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:20:47.763896 containerd[1505]: time="2025-11-01T00:20:47.762874870Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:20:47.763896 containerd[1505]: time="2025-11-01T00:20:47.762985538Z" level=info msg="Start subscribing containerd event" Nov 1 00:20:47.763896 containerd[1505]: time="2025-11-01T00:20:47.763022738Z" level=info msg="Start recovering state" Nov 1 00:20:47.763896 containerd[1505]: time="2025-11-01T00:20:47.763067111Z" level=info msg="Start event monitor" Nov 1 00:20:47.763896 containerd[1505]: time="2025-11-01T00:20:47.763083021Z" level=info msg="Start snapshots 
syncer" Nov 1 00:20:47.763896 containerd[1505]: time="2025-11-01T00:20:47.763091507Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:20:47.763896 containerd[1505]: time="2025-11-01T00:20:47.763098821Z" level=info msg="Start streaming server" Nov 1 00:20:47.763896 containerd[1505]: time="2025-11-01T00:20:47.763481448Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:20:47.763896 containerd[1505]: time="2025-11-01T00:20:47.763523386Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:20:47.769828 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:20:47.772526 containerd[1505]: time="2025-11-01T00:20:47.772194176Z" level=info msg="containerd successfully booted in 0.101034s" Nov 1 00:20:47.837729 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:20:47.858296 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:20:47.867868 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:20:47.873746 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:20:47.873895 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:20:47.885602 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:20:47.895414 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:20:47.907205 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:20:47.911606 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:20:47.913217 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:20:48.037459 tar[1504]: linux-amd64/README.md Nov 1 00:20:48.048828 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:20:48.357937 systemd-networkd[1391]: eth0: Gained IPv6LL Nov 1 00:20:48.359007 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection. 
Nov 1 00:20:48.363409 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:20:48.366274 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:20:48.377055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:20:48.383367 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:20:48.415419 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:20:48.614061 systemd-networkd[1391]: eth1: Gained IPv6LL Nov 1 00:20:48.614585 systemd-timesyncd[1452]: Network configuration changed, trying to establish connection. Nov 1 00:20:49.655139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:20:49.659391 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:20:49.659600 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:20:49.664277 systemd[1]: Startup finished in 1.539s (kernel) + 5.973s (initrd) + 4.984s (userspace) = 12.496s. Nov 1 00:20:50.394533 kubelet[1601]: E1101 00:20:50.394444 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:20:50.398640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:20:50.398797 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:20:50.399249 systemd[1]: kubelet.service: Consumed 1.388s CPU time. Nov 1 00:21:00.650156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:21:00.657407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 1 00:21:00.766409 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:00.769410 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:00.809111 kubelet[1621]: E1101 00:21:00.809026 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:00.815603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:00.815762 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:11.066731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:21:11.077044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:11.229089 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:11.232400 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:11.300864 kubelet[1636]: E1101 00:21:11.300768 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:11.304547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:11.304878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:18.937589 systemd-timesyncd[1452]: Contacted time server 172.104.149.161:123 (2.flatcar.pool.ntp.org). 
Nov 1 00:21:18.937727 systemd-timesyncd[1452]: Initial clock synchronization to Sat 2025-11-01 00:21:19.312249 UTC. Nov 1 00:21:21.322944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 00:21:21.334055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:21.497476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:21.512127 (kubelet)[1651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:21.572171 kubelet[1651]: E1101 00:21:21.572088 1651 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:21.575238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:21.575442 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:31.822830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 00:21:31.834039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:31.974229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:21:31.977406 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:32.022886 kubelet[1666]: E1101 00:21:32.022813 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:32.025884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:32.025999 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:32.213870 update_engine[1492]: I20251101 00:21:32.213606 1492 update_attempter.cc:509] Updating boot flags... Nov 1 00:21:32.273797 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1682) Nov 1 00:21:32.341823 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1678) Nov 1 00:21:32.384727 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1678) Nov 1 00:21:34.032190 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:21:34.038578 systemd[1]: Started sshd@0-95.217.181.13:22-147.75.109.163:37654.service - OpenSSH per-connection server daemon (147.75.109.163:37654). Nov 1 00:21:35.071206 sshd[1695]: Accepted publickey for core from 147.75.109.163 port 37654 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:21:35.074236 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:35.091266 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:21:35.100262 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Nov 1 00:21:35.105408 systemd-logind[1487]: New session 1 of user core. Nov 1 00:21:35.126175 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:21:35.135825 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:21:35.151128 (systemd)[1699]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:35.312835 systemd[1699]: Queued start job for default target default.target. Nov 1 00:21:35.324440 systemd[1699]: Created slice app.slice - User Application Slice. Nov 1 00:21:35.324464 systemd[1699]: Reached target paths.target - Paths. Nov 1 00:21:35.324475 systemd[1699]: Reached target timers.target - Timers. Nov 1 00:21:35.325509 systemd[1699]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:21:35.334603 systemd[1699]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:21:35.334661 systemd[1699]: Reached target sockets.target - Sockets. Nov 1 00:21:35.334987 systemd[1699]: Reached target basic.target - Basic System. Nov 1 00:21:35.335035 systemd[1699]: Reached target default.target - Main User Target. Nov 1 00:21:35.335063 systemd[1699]: Startup finished in 173ms. Nov 1 00:21:35.335088 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:21:35.342781 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:21:36.085174 systemd[1]: Started sshd@1-95.217.181.13:22-147.75.109.163:37658.service - OpenSSH per-connection server daemon (147.75.109.163:37658). Nov 1 00:21:37.209220 sshd[1710]: Accepted publickey for core from 147.75.109.163 port 37658 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:21:37.211452 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:37.218524 systemd-logind[1487]: New session 2 of user core. Nov 1 00:21:37.224946 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 1 00:21:37.977827 sshd[1710]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:37.981953 systemd[1]: sshd@1-95.217.181.13:22-147.75.109.163:37658.service: Deactivated successfully. Nov 1 00:21:37.985291 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:21:37.987887 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:21:37.989846 systemd-logind[1487]: Removed session 2. Nov 1 00:21:38.183128 systemd[1]: Started sshd@2-95.217.181.13:22-147.75.109.163:37662.service - OpenSSH per-connection server daemon (147.75.109.163:37662). Nov 1 00:21:39.324592 sshd[1717]: Accepted publickey for core from 147.75.109.163 port 37662 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:21:39.327059 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:39.334619 systemd-logind[1487]: New session 3 of user core. Nov 1 00:21:39.339903 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:21:40.098974 sshd[1717]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:40.103023 systemd[1]: sshd@2-95.217.181.13:22-147.75.109.163:37662.service: Deactivated successfully. Nov 1 00:21:40.106091 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:21:40.108153 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:21:40.110028 systemd-logind[1487]: Removed session 3. Nov 1 00:21:40.292053 systemd[1]: Started sshd@3-95.217.181.13:22-147.75.109.163:36794.service - OpenSSH per-connection server daemon (147.75.109.163:36794). Nov 1 00:21:41.414486 sshd[1724]: Accepted publickey for core from 147.75.109.163 port 36794 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:21:41.417097 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:41.425302 systemd-logind[1487]: New session 4 of user core. 
Nov 1 00:21:41.431960 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 1 00:21:42.072945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Nov 1 00:21:42.084101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:21:42.183290 sshd[1724]: pam_unix(sshd:session): session closed for user core
Nov 1 00:21:42.188448 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit.
Nov 1 00:21:42.189362 systemd[1]: sshd@3-95.217.181.13:22-147.75.109.163:36794.service: Deactivated successfully.
Nov 1 00:21:42.193091 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 00:21:42.199068 systemd-logind[1487]: Removed session 4.
Nov 1 00:21:42.260830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:21:42.275034 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:21:42.325829 kubelet[1738]: E1101 00:21:42.325590 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:21:42.341851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:21:42.342016 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:21:42.349108 systemd[1]: Started sshd@4-95.217.181.13:22-147.75.109.163:36804.service - OpenSSH per-connection server daemon (147.75.109.163:36804).
Nov 1 00:21:43.380257 sshd[1746]: Accepted publickey for core from 147.75.109.163 port 36804 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:21:43.382326 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:21:43.390511 systemd-logind[1487]: New session 5 of user core.
Nov 1 00:21:43.399984 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 1 00:21:43.931900 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 00:21:43.932354 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:21:43.956240 sudo[1749]: pam_unix(sudo:session): session closed for user root
Nov 1 00:21:44.121896 sshd[1746]: pam_unix(sshd:session): session closed for user core
Nov 1 00:21:44.125724 systemd[1]: sshd@4-95.217.181.13:22-147.75.109.163:36804.service: Deactivated successfully.
Nov 1 00:21:44.128032 systemd[1]: session-5.scope: Deactivated successfully.
Nov 1 00:21:44.129850 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit.
Nov 1 00:21:44.131488 systemd-logind[1487]: Removed session 5.
Nov 1 00:21:44.301134 systemd[1]: Started sshd@5-95.217.181.13:22-147.75.109.163:36816.service - OpenSSH per-connection server daemon (147.75.109.163:36816).
Nov 1 00:21:45.316611 sshd[1754]: Accepted publickey for core from 147.75.109.163 port 36816 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:21:45.318891 sshd[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:21:45.327763 systemd-logind[1487]: New session 6 of user core.
Nov 1 00:21:45.336905 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 00:21:45.850891 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 00:21:45.851383 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:21:45.858464 sudo[1758]: pam_unix(sudo:session): session closed for user root
Nov 1 00:21:45.870582 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 1 00:21:45.871343 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:21:45.894139 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 1 00:21:45.899732 auditctl[1761]: No rules
Nov 1 00:21:45.900867 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 00:21:45.901237 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 1 00:21:45.904736 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 00:21:45.945820 augenrules[1779]: No rules
Nov 1 00:21:45.946880 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 00:21:45.948578 sudo[1757]: pam_unix(sudo:session): session closed for user root
Nov 1 00:21:46.111377 sshd[1754]: pam_unix(sshd:session): session closed for user core
Nov 1 00:21:46.116424 systemd[1]: sshd@5-95.217.181.13:22-147.75.109.163:36816.service: Deactivated successfully.
Nov 1 00:21:46.119341 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 00:21:46.122036 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit.
Nov 1 00:21:46.124238 systemd-logind[1487]: Removed session 6.
Nov 1 00:21:46.294499 systemd[1]: Started sshd@6-95.217.181.13:22-147.75.109.163:36818.service - OpenSSH per-connection server daemon (147.75.109.163:36818).
Nov 1 00:21:47.307304 sshd[1787]: Accepted publickey for core from 147.75.109.163 port 36818 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:21:47.309738 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:21:47.317764 systemd-logind[1487]: New session 7 of user core.
Nov 1 00:21:47.330953 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 00:21:47.843687 sudo[1790]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 00:21:47.844018 sudo[1790]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:21:48.218083 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 1 00:21:48.218905 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 1 00:21:48.608842 dockerd[1806]: time="2025-11-01T00:21:48.608745131Z" level=info msg="Starting up"
Nov 1 00:21:48.732340 dockerd[1806]: time="2025-11-01T00:21:48.732250120Z" level=info msg="Loading containers: start."
Nov 1 00:21:48.895151 kernel: Initializing XFRM netlink socket
Nov 1 00:21:48.986767 systemd-networkd[1391]: docker0: Link UP
Nov 1 00:21:49.003692 dockerd[1806]: time="2025-11-01T00:21:49.003620891Z" level=info msg="Loading containers: done."
Nov 1 00:21:49.019186 dockerd[1806]: time="2025-11-01T00:21:49.019128798Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 00:21:49.019330 dockerd[1806]: time="2025-11-01T00:21:49.019247341Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 1 00:21:49.019373 dockerd[1806]: time="2025-11-01T00:21:49.019341768Z" level=info msg="Daemon has completed initialization"
Nov 1 00:21:49.057941 dockerd[1806]: time="2025-11-01T00:21:49.057855201Z" level=info msg="API listen on /run/docker.sock"
Nov 1 00:21:49.058154 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 1 00:21:50.617418 containerd[1505]: time="2025-11-01T00:21:50.617303127Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 1 00:21:51.280428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872850024.mount: Deactivated successfully.
Nov 1 00:21:52.572113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Nov 1 00:21:52.576860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:21:52.678264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:21:52.680703 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:21:52.717691 kubelet[2011]: E1101 00:21:52.716984 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:21:52.719954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:21:52.720091 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:21:52.900065 containerd[1505]: time="2025-11-01T00:21:52.899924776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:52.901456 containerd[1505]: time="2025-11-01T00:21:52.901259069Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28838016"
Nov 1 00:21:52.902742 containerd[1505]: time="2025-11-01T00:21:52.902337587Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:52.905199 containerd[1505]: time="2025-11-01T00:21:52.905150165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:52.906202 containerd[1505]: time="2025-11-01T00:21:52.906162413Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.288795958s"
Nov 1 00:21:52.906256 containerd[1505]: time="2025-11-01T00:21:52.906206669Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 1 00:21:52.907178 containerd[1505]: time="2025-11-01T00:21:52.907137556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 1 00:21:54.422063 containerd[1505]: time="2025-11-01T00:21:54.421953652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:54.423501 containerd[1505]: time="2025-11-01T00:21:54.423301505Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787049"
Nov 1 00:21:54.425112 containerd[1505]: time="2025-11-01T00:21:54.424749601Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:54.428514 containerd[1505]: time="2025-11-01T00:21:54.428486810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:54.429984 containerd[1505]: time="2025-11-01T00:21:54.429936300Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.52275819s"
Nov 1 00:21:54.430027 containerd[1505]: time="2025-11-01T00:21:54.429992956Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 1 00:21:54.430619 containerd[1505]: time="2025-11-01T00:21:54.430595967Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 1 00:21:55.695632 containerd[1505]: time="2025-11-01T00:21:55.695574858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:55.696923 containerd[1505]: time="2025-11-01T00:21:55.696685679Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176311"
Nov 1 00:21:55.698364 containerd[1505]: time="2025-11-01T00:21:55.697972774Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:55.700741 containerd[1505]: time="2025-11-01T00:21:55.700712630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:55.701736 containerd[1505]: time="2025-11-01T00:21:55.701708602Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.27101937s"
Nov 1 00:21:55.701779 containerd[1505]: time="2025-11-01T00:21:55.701736277Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 1 00:21:55.702838 containerd[1505]: time="2025-11-01T00:21:55.702796982Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 1 00:21:56.764659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270510016.mount: Deactivated successfully.
Nov 1 00:21:57.128721 containerd[1505]: time="2025-11-01T00:21:57.128635301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:57.130034 containerd[1505]: time="2025-11-01T00:21:57.129926798Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924234"
Nov 1 00:21:57.131691 containerd[1505]: time="2025-11-01T00:21:57.131022113Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:57.133314 containerd[1505]: time="2025-11-01T00:21:57.133232374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:57.134125 containerd[1505]: time="2025-11-01T00:21:57.133764501Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.430928664s"
Nov 1 00:21:57.134125 containerd[1505]: time="2025-11-01T00:21:57.133800314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 1 00:21:57.134628 containerd[1505]: time="2025-11-01T00:21:57.134309401Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 1 00:21:57.620459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121004247.mount: Deactivated successfully.
Nov 1 00:21:58.427106 containerd[1505]: time="2025-11-01T00:21:58.427041788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:58.428683 containerd[1505]: time="2025-11-01T00:21:58.428476030Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
Nov 1 00:21:58.431706 containerd[1505]: time="2025-11-01T00:21:58.430176636Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:58.434586 containerd[1505]: time="2025-11-01T00:21:58.434547107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:58.435638 containerd[1505]: time="2025-11-01T00:21:58.435606205Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.301268481s"
Nov 1 00:21:58.435638 containerd[1505]: time="2025-11-01T00:21:58.435638057Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 1 00:21:58.437344 containerd[1505]: time="2025-11-01T00:21:58.437302812Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 1 00:21:58.922984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408102393.mount: Deactivated successfully.
Nov 1 00:21:58.932372 containerd[1505]: time="2025-11-01T00:21:58.932282685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:58.933705 containerd[1505]: time="2025-11-01T00:21:58.933589360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Nov 1 00:21:58.935093 containerd[1505]: time="2025-11-01T00:21:58.935026820Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:58.938269 containerd[1505]: time="2025-11-01T00:21:58.938201872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:21:58.939708 containerd[1505]: time="2025-11-01T00:21:58.939360156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 502.015417ms"
Nov 1 00:21:58.939708 containerd[1505]: time="2025-11-01T00:21:58.939408741Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 1 00:21:58.940695 containerd[1505]: time="2025-11-01T00:21:58.940625926Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 1 00:21:59.520324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount439230634.mount: Deactivated successfully.
Nov 1 00:22:01.428977 containerd[1505]: time="2025-11-01T00:22:01.428883159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:01.430086 containerd[1505]: time="2025-11-01T00:22:01.430028411Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132"
Nov 1 00:22:01.431244 containerd[1505]: time="2025-11-01T00:22:01.430865345Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:01.433483 containerd[1505]: time="2025-11-01T00:22:01.433431198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:01.435256 containerd[1505]: time="2025-11-01T00:22:01.435222132Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.494555914s"
Nov 1 00:22:01.435256 containerd[1505]: time="2025-11-01T00:22:01.435250452Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 1 00:22:02.822573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Nov 1 00:22:02.830048 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:02.963992 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:02.970035 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:22:03.012367 kubelet[2171]: E1101 00:22:03.012234 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:22:03.014024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:22:03.014248 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:22:04.812377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:04.829036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:04.854935 systemd[1]: Reloading requested from client PID 2186 ('systemctl') (unit session-7.scope)...
Nov 1 00:22:04.855066 systemd[1]: Reloading...
Nov 1 00:22:04.967116 zram_generator::config[2224]: No configuration found.
Nov 1 00:22:05.063800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:22:05.146367 systemd[1]: Reloading finished in 290 ms.
Nov 1 00:22:05.200750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:05.204548 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 00:22:05.207095 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:05.207603 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 00:22:05.207983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:05.214197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:05.312995 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 00:22:05.313018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:05.356592 kubelet[2283]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:22:05.356592 kubelet[2283]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 00:22:05.356592 kubelet[2283]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:22:05.357019 kubelet[2283]: I1101 00:22:05.356648 2283 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 00:22:05.953897 kubelet[2283]: I1101 00:22:05.953808 2283 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 1 00:22:05.953897 kubelet[2283]: I1101 00:22:05.953852 2283 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 00:22:05.954249 kubelet[2283]: I1101 00:22:05.954215 2283 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 1 00:22:06.006823 kubelet[2283]: E1101 00:22:06.006769 2283 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://95.217.181.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 95.217.181.13:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:22:06.009133 kubelet[2283]: I1101 00:22:06.008905 2283 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 00:22:06.018224 kubelet[2283]: E1101 00:22:06.018163 2283 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 00:22:06.018224 kubelet[2283]: I1101 00:22:06.018190 2283 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 1 00:22:06.029726 kubelet[2283]: I1101 00:22:06.029353 2283 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 1 00:22:06.033602 kubelet[2283]: I1101 00:22:06.032930 2283 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 00:22:06.033602 kubelet[2283]: I1101 00:22:06.033170 2283 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-a2a464dc28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 00:22:06.036981 kubelet[2283]: I1101 00:22:06.036732 2283 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 00:22:06.036981 kubelet[2283]: I1101 00:22:06.036766 2283 container_manager_linux.go:304] "Creating device plugin manager"
Nov 1 00:22:06.039079 kubelet[2283]: I1101 00:22:06.039057 2283 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:22:06.049779 kubelet[2283]: I1101 00:22:06.049486 2283 kubelet.go:446] "Attempting to sync node with API server"
Nov 1 00:22:06.049779 kubelet[2283]: I1101 00:22:06.049522 2283 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 00:22:06.049779 kubelet[2283]: I1101 00:22:06.049546 2283 kubelet.go:352] "Adding apiserver pod source"
Nov 1 00:22:06.049779 kubelet[2283]: I1101 00:22:06.049558 2283 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 00:22:06.058982 kubelet[2283]: W1101 00:22:06.057236 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://95.217.181.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a2a464dc28&limit=500&resourceVersion=0": dial tcp 95.217.181.13:6443: connect: connection refused
Nov 1 00:22:06.058982 kubelet[2283]: E1101 00:22:06.057310 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://95.217.181.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a2a464dc28&limit=500&resourceVersion=0\": dial tcp 95.217.181.13:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:22:06.058982 kubelet[2283]: W1101 00:22:06.057822 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://95.217.181.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 95.217.181.13:6443: connect: connection refused
Nov 1 00:22:06.058982 kubelet[2283]: E1101 00:22:06.057877 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://95.217.181.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 95.217.181.13:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:22:06.059135 kubelet[2283]: I1101 00:22:06.059115 2283 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 00:22:06.065188 kubelet[2283]: I1101 00:22:06.065023 2283 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 1 00:22:06.065188 kubelet[2283]: W1101 00:22:06.065098 2283 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 1 00:22:06.069588 kubelet[2283]: I1101 00:22:06.069190 2283 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 1 00:22:06.069588 kubelet[2283]: I1101 00:22:06.069233 2283 server.go:1287] "Started kubelet"
Nov 1 00:22:06.071061 kubelet[2283]: I1101 00:22:06.071016 2283 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 00:22:06.071170 kubelet[2283]: I1101 00:22:06.071156 2283 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 00:22:06.072438 kubelet[2283]: I1101 00:22:06.072405 2283 server.go:479] "Adding debug handlers to kubelet server"
Nov 1 00:22:06.074207 kubelet[2283]: I1101 00:22:06.073835 2283 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 00:22:06.074207 kubelet[2283]: I1101 00:22:06.074102 2283 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 00:22:06.082105 kubelet[2283]: I1101 00:22:06.082090 2283 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 1 00:22:06.085168 kubelet[2283]: I1101 00:22:06.085148 2283 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 00:22:06.086221 kubelet[2283]: E1101 00:22:06.086186 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a2a464dc28\" not found"
Nov 1 00:22:06.089219 kubelet[2283]: E1101 00:22:06.087502 2283 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://95.217.181.13:6443/api/v1/namespaces/default/events\": dial tcp 95.217.181.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-a2a464dc28.1873ba200e095ca7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-a2a464dc28,UID:ci-4081-3-6-n-a2a464dc28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-a2a464dc28,},FirstTimestamp:2025-11-01 00:22:06.069210279 +0000 UTC m=+0.751462285,LastTimestamp:2025-11-01 00:22:06.069210279 +0000 UTC m=+0.751462285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-a2a464dc28,}"
Nov 1 00:22:06.089657 kubelet[2283]: E1101 00:22:06.089461 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.181.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a2a464dc28?timeout=10s\": dial tcp 95.217.181.13:6443: connect: connection refused" interval="200ms"
Nov 1 00:22:06.089657 kubelet[2283]: I1101 00:22:06.089531 2283 reconciler.go:26] "Reconciler: start to sync state"
Nov 1 00:22:06.089657 kubelet[2283]: I1101 00:22:06.089560 2283 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 1 00:22:06.090146 kubelet[2283]: W1101 00:22:06.090108 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://95.217.181.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.217.181.13:6443: connect: connection refused
Nov 1 00:22:06.090244 kubelet[2283]: E1101 00:22:06.090229 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://95.217.181.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 95.217.181.13:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:22:06.093407 kubelet[2283]: I1101 00:22:06.091283 2283 factory.go:221] Registration of the systemd container factory successfully
Nov 1 00:22:06.093407 kubelet[2283]: I1101 00:22:06.091387 2283 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 00:22:06.102169 kubelet[2283]: I1101 00:22:06.102130 2283 factory.go:221] Registration of the containerd container factory successfully
Nov 1 00:22:06.112425 kubelet[2283]: E1101 00:22:06.112236 2283 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 1 00:22:06.119705 kubelet[2283]: I1101 00:22:06.117762 2283 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 1 00:22:06.119834 kubelet[2283]: I1101 00:22:06.119764 2283 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 1 00:22:06.119834 kubelet[2283]: I1101 00:22:06.119783 2283 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 1 00:22:06.119992 kubelet[2283]: I1101 00:22:06.119963 2283 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 00:22:06.119992 kubelet[2283]: I1101 00:22:06.119978 2283 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:22:06.120056 kubelet[2283]: E1101 00:22:06.120027 2283 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:06.125258 kubelet[2283]: W1101 00:22:06.125149 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://95.217.181.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.217.181.13:6443: connect: connection refused Nov 1 00:22:06.125258 kubelet[2283]: E1101 00:22:06.125217 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://95.217.181.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 95.217.181.13:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:06.138856 kubelet[2283]: I1101 00:22:06.138541 2283 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:06.138856 kubelet[2283]: I1101 00:22:06.138567 2283 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:06.138856 kubelet[2283]: I1101 00:22:06.138587 2283 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:06.141269 kubelet[2283]: I1101 00:22:06.141005 2283 policy_none.go:49] "None policy: Start" Nov 1 00:22:06.141269 kubelet[2283]: I1101 00:22:06.141026 2283 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:22:06.141269 kubelet[2283]: I1101 00:22:06.141039 2283 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:22:06.146927 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:22:06.157975 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 1 00:22:06.170303 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:22:06.172198 kubelet[2283]: I1101 00:22:06.172168 2283 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:22:06.172394 kubelet[2283]: I1101 00:22:06.172366 2283 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:06.172438 kubelet[2283]: I1101 00:22:06.172383 2283 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:06.172950 kubelet[2283]: I1101 00:22:06.172925 2283 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:06.174872 kubelet[2283]: E1101 00:22:06.174842 2283 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:06.174938 kubelet[2283]: E1101 00:22:06.174897 2283 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-a2a464dc28\" not found" Nov 1 00:22:06.240065 systemd[1]: Created slice kubepods-burstable-podca66ac987e61670a8d0eb032083d31c7.slice - libcontainer container kubepods-burstable-podca66ac987e61670a8d0eb032083d31c7.slice. Nov 1 00:22:06.266599 kubelet[2283]: E1101 00:22:06.266495 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a2a464dc28\" not found" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.274819 systemd[1]: Created slice kubepods-burstable-pod1c010350a89ca9af82c9473c442f6e91.slice - libcontainer container kubepods-burstable-pod1c010350a89ca9af82c9473c442f6e91.slice. 
Nov 1 00:22:06.277762 kubelet[2283]: I1101 00:22:06.277715 2283 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.279724 kubelet[2283]: E1101 00:22:06.279584 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://95.217.181.13:6443/api/v1/nodes\": dial tcp 95.217.181.13:6443: connect: connection refused" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.286123 kubelet[2283]: E1101 00:22:06.286080 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a2a464dc28\" not found" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.290912 systemd[1]: Created slice kubepods-burstable-pod6f7ee3b28814e68be6aad1025a9c8f30.slice - libcontainer container kubepods-burstable-pod6f7ee3b28814e68be6aad1025a9c8f30.slice. Nov 1 00:22:06.294123 kubelet[2283]: E1101 00:22:06.294071 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.181.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a2a464dc28?timeout=10s\": dial tcp 95.217.181.13:6443: connect: connection refused" interval="400ms" Nov 1 00:22:06.294954 kubelet[2283]: E1101 00:22:06.294922 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a2a464dc28\" not found" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.391740 kubelet[2283]: I1101 00:22:06.391461 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca66ac987e61670a8d0eb032083d31c7-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a2a464dc28\" (UID: \"ca66ac987e61670a8d0eb032083d31c7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.391740 kubelet[2283]: I1101 00:22:06.391730 2283 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca66ac987e61670a8d0eb032083d31c7-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a2a464dc28\" (UID: \"ca66ac987e61670a8d0eb032083d31c7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.392596 kubelet[2283]: I1101 00:22:06.391784 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca66ac987e61670a8d0eb032083d31c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-a2a464dc28\" (UID: \"ca66ac987e61670a8d0eb032083d31c7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.392596 kubelet[2283]: I1101 00:22:06.391832 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.392596 kubelet[2283]: I1101 00:22:06.391880 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.392596 kubelet[2283]: I1101 00:22:06.391920 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.392596 kubelet[2283]: I1101 00:22:06.391958 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.392940 kubelet[2283]: I1101 00:22:06.391999 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.392940 kubelet[2283]: I1101 00:22:06.392048 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f7ee3b28814e68be6aad1025a9c8f30-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-a2a464dc28\" (UID: \"6f7ee3b28814e68be6aad1025a9c8f30\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.483473 kubelet[2283]: I1101 00:22:06.483404 2283 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.484047 kubelet[2283]: E1101 00:22:06.483944 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://95.217.181.13:6443/api/v1/nodes\": dial tcp 95.217.181.13:6443: connect: connection refused" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.568863 containerd[1505]: time="2025-11-01T00:22:06.568652207Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-a2a464dc28,Uid:ca66ac987e61670a8d0eb032083d31c7,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:06.588695 containerd[1505]: time="2025-11-01T00:22:06.588542006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-a2a464dc28,Uid:1c010350a89ca9af82c9473c442f6e91,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:06.598900 containerd[1505]: time="2025-11-01T00:22:06.598825064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-a2a464dc28,Uid:6f7ee3b28814e68be6aad1025a9c8f30,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:06.695077 kubelet[2283]: E1101 00:22:06.694998 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.181.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a2a464dc28?timeout=10s\": dial tcp 95.217.181.13:6443: connect: connection refused" interval="800ms" Nov 1 00:22:06.886940 kubelet[2283]: I1101 00:22:06.886901 2283 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:06.887648 kubelet[2283]: E1101 00:22:06.887524 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://95.217.181.13:6443/api/v1/nodes\": dial tcp 95.217.181.13:6443: connect: connection refused" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:07.032962 kubelet[2283]: W1101 00:22:07.032707 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://95.217.181.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a2a464dc28&limit=500&resourceVersion=0": dial tcp 95.217.181.13:6443: connect: connection refused Nov 1 00:22:07.032962 kubelet[2283]: E1101 00:22:07.032867 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://95.217.181.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-a2a464dc28&limit=500&resourceVersion=0\": dial tcp 95.217.181.13:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:07.094786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059844726.mount: Deactivated successfully. Nov 1 00:22:07.105191 containerd[1505]: time="2025-11-01T00:22:07.105089082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:07.106697 containerd[1505]: time="2025-11-01T00:22:07.106621812Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:07.108399 containerd[1505]: time="2025-11-01T00:22:07.108255025Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:07.110177 containerd[1505]: time="2025-11-01T00:22:07.110037463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:07.112428 containerd[1505]: time="2025-11-01T00:22:07.112323102Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:07.116521 containerd[1505]: time="2025-11-01T00:22:07.116424226Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:07.117630 containerd[1505]: time="2025-11-01T00:22:07.117517140Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Nov 1 00:22:07.125405 
containerd[1505]: time="2025-11-01T00:22:07.124767969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:07.126918 containerd[1505]: time="2025-11-01T00:22:07.126845479Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 538.091398ms" Nov 1 00:22:07.131320 containerd[1505]: time="2025-11-01T00:22:07.131271432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.439796ms" Nov 1 00:22:07.132630 containerd[1505]: time="2025-11-01T00:22:07.132542768Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.598699ms" Nov 1 00:22:07.253789 kubelet[2283]: W1101 00:22:07.253283 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://95.217.181.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 95.217.181.13:6443: connect: connection refused Nov 1 00:22:07.253789 kubelet[2283]: E1101 00:22:07.253382 2283 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://95.217.181.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 95.217.181.13:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:07.296695 kubelet[2283]: W1101 00:22:07.295103 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://95.217.181.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 95.217.181.13:6443: connect: connection refused Nov 1 00:22:07.297738 kubelet[2283]: E1101 00:22:07.296393 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://95.217.181.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 95.217.181.13:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:07.303902 containerd[1505]: time="2025-11-01T00:22:07.300035342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:07.303902 containerd[1505]: time="2025-11-01T00:22:07.300116369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:07.303902 containerd[1505]: time="2025-11-01T00:22:07.300133668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:07.303902 containerd[1505]: time="2025-11-01T00:22:07.300257966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:07.319024 containerd[1505]: time="2025-11-01T00:22:07.318901943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:07.319447 containerd[1505]: time="2025-11-01T00:22:07.318975162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:07.328683 containerd[1505]: time="2025-11-01T00:22:07.322966040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:07.328683 containerd[1505]: time="2025-11-01T00:22:07.323096722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:07.332032 systemd[1]: Started cri-containerd-9e489bc6366899a12451abd05656697eadf0dd81fec09145af97a9bf726eeeed.scope - libcontainer container 9e489bc6366899a12451abd05656697eadf0dd81fec09145af97a9bf726eeeed. Nov 1 00:22:07.341591 containerd[1505]: time="2025-11-01T00:22:07.340324177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:07.341591 containerd[1505]: time="2025-11-01T00:22:07.340416781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:07.341591 containerd[1505]: time="2025-11-01T00:22:07.340442129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:07.341591 containerd[1505]: time="2025-11-01T00:22:07.340519038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:07.342452 systemd[1]: Started cri-containerd-6dc858e1f37b13d91fba2508fda06fe33f713e20abf79bfe0f11067c9051f691.scope - libcontainer container 6dc858e1f37b13d91fba2508fda06fe33f713e20abf79bfe0f11067c9051f691. Nov 1 00:22:07.358887 systemd[1]: Started cri-containerd-9e131b988666fdff20a384a453c582620a5027435ec5fae37b91b1aa48a4a017.scope - libcontainer container 9e131b988666fdff20a384a453c582620a5027435ec5fae37b91b1aa48a4a017. Nov 1 00:22:07.386138 containerd[1505]: time="2025-11-01T00:22:07.385952668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-a2a464dc28,Uid:ca66ac987e61670a8d0eb032083d31c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e489bc6366899a12451abd05656697eadf0dd81fec09145af97a9bf726eeeed\"" Nov 1 00:22:07.388935 containerd[1505]: time="2025-11-01T00:22:07.388802741Z" level=info msg="CreateContainer within sandbox \"9e489bc6366899a12451abd05656697eadf0dd81fec09145af97a9bf726eeeed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:22:07.410392 containerd[1505]: time="2025-11-01T00:22:07.409764452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-a2a464dc28,Uid:1c010350a89ca9af82c9473c442f6e91,Namespace:kube-system,Attempt:0,} returns sandbox id \"6dc858e1f37b13d91fba2508fda06fe33f713e20abf79bfe0f11067c9051f691\"" Nov 1 00:22:07.413385 containerd[1505]: time="2025-11-01T00:22:07.413251577Z" level=info msg="CreateContainer within sandbox \"6dc858e1f37b13d91fba2508fda06fe33f713e20abf79bfe0f11067c9051f691\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:22:07.427380 containerd[1505]: time="2025-11-01T00:22:07.427280644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-a2a464dc28,Uid:6f7ee3b28814e68be6aad1025a9c8f30,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"9e131b988666fdff20a384a453c582620a5027435ec5fae37b91b1aa48a4a017\"" Nov 1 00:22:07.429652 containerd[1505]: time="2025-11-01T00:22:07.429594980Z" level=info msg="CreateContainer within sandbox \"9e131b988666fdff20a384a453c582620a5027435ec5fae37b91b1aa48a4a017\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:22:07.432732 containerd[1505]: time="2025-11-01T00:22:07.432624838Z" level=info msg="CreateContainer within sandbox \"9e489bc6366899a12451abd05656697eadf0dd81fec09145af97a9bf726eeeed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e7dac7e0a6668e2c9935a1c35a414d00482e108cf9dbea8be5e843586b3da92e\"" Nov 1 00:22:07.434092 containerd[1505]: time="2025-11-01T00:22:07.433222580Z" level=info msg="StartContainer for \"e7dac7e0a6668e2c9935a1c35a414d00482e108cf9dbea8be5e843586b3da92e\"" Nov 1 00:22:07.434804 containerd[1505]: time="2025-11-01T00:22:07.434774895Z" level=info msg="CreateContainer within sandbox \"6dc858e1f37b13d91fba2508fda06fe33f713e20abf79bfe0f11067c9051f691\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05\"" Nov 1 00:22:07.435281 containerd[1505]: time="2025-11-01T00:22:07.435268465Z" level=info msg="StartContainer for \"bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05\"" Nov 1 00:22:07.450843 containerd[1505]: time="2025-11-01T00:22:07.450808572Z" level=info msg="CreateContainer within sandbox \"9e131b988666fdff20a384a453c582620a5027435ec5fae37b91b1aa48a4a017\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"46bda5760cbf5ea2e3b660f9b729d2d9317c091203b4d645b7d19f88be2fb80f\"" Nov 1 00:22:07.451417 containerd[1505]: time="2025-11-01T00:22:07.451402424Z" level=info msg="StartContainer for \"46bda5760cbf5ea2e3b660f9b729d2d9317c091203b4d645b7d19f88be2fb80f\"" Nov 1 00:22:07.460927 systemd[1]: Started 
cri-containerd-e7dac7e0a6668e2c9935a1c35a414d00482e108cf9dbea8be5e843586b3da92e.scope - libcontainer container e7dac7e0a6668e2c9935a1c35a414d00482e108cf9dbea8be5e843586b3da92e. Nov 1 00:22:07.469895 systemd[1]: Started cri-containerd-bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05.scope - libcontainer container bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05. Nov 1 00:22:07.491341 systemd[1]: Started cri-containerd-46bda5760cbf5ea2e3b660f9b729d2d9317c091203b4d645b7d19f88be2fb80f.scope - libcontainer container 46bda5760cbf5ea2e3b660f9b729d2d9317c091203b4d645b7d19f88be2fb80f. Nov 1 00:22:07.495787 kubelet[2283]: E1101 00:22:07.495754 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://95.217.181.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-a2a464dc28?timeout=10s\": dial tcp 95.217.181.13:6443: connect: connection refused" interval="1.6s" Nov 1 00:22:07.520653 containerd[1505]: time="2025-11-01T00:22:07.519926643Z" level=info msg="StartContainer for \"e7dac7e0a6668e2c9935a1c35a414d00482e108cf9dbea8be5e843586b3da92e\" returns successfully" Nov 1 00:22:07.539402 containerd[1505]: time="2025-11-01T00:22:07.539347554Z" level=info msg="StartContainer for \"bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05\" returns successfully" Nov 1 00:22:07.564308 containerd[1505]: time="2025-11-01T00:22:07.564192435Z" level=info msg="StartContainer for \"46bda5760cbf5ea2e3b660f9b729d2d9317c091203b4d645b7d19f88be2fb80f\" returns successfully" Nov 1 00:22:07.630981 kubelet[2283]: W1101 00:22:07.630919 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://95.217.181.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 95.217.181.13:6443: connect: connection refused Nov 1 00:22:07.631120 kubelet[2283]: E1101 00:22:07.630990 2283 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://95.217.181.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 95.217.181.13:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:07.691582 kubelet[2283]: I1101 00:22:07.691284 2283 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:07.691582 kubelet[2283]: E1101 00:22:07.691557 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://95.217.181.13:6443/api/v1/nodes\": dial tcp 95.217.181.13:6443: connect: connection refused" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:08.138366 kubelet[2283]: E1101 00:22:08.138093 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a2a464dc28\" not found" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:08.140911 kubelet[2283]: E1101 00:22:08.140900 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a2a464dc28\" not found" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:08.142630 kubelet[2283]: E1101 00:22:08.142527 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a2a464dc28\" not found" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.147823 kubelet[2283]: E1101 00:22:09.147253 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a2a464dc28\" not found" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.149034 kubelet[2283]: E1101 00:22:09.148885 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-a2a464dc28\" not found" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.174230 kubelet[2283]: E1101 00:22:09.174179 
2283 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-a2a464dc28\" not found" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.294125 kubelet[2283]: I1101 00:22:09.294090 2283 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.310011 kubelet[2283]: I1101 00:22:09.309958 2283 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.310011 kubelet[2283]: E1101 00:22:09.310001 2283 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-a2a464dc28\": node \"ci-4081-3-6-n-a2a464dc28\" not found" Nov 1 00:22:09.333888 kubelet[2283]: E1101 00:22:09.333825 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-a2a464dc28\" not found" Nov 1 00:22:09.487220 kubelet[2283]: I1101 00:22:09.487049 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.497180 kubelet[2283]: E1101 00:22:09.497124 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-a2a464dc28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.497180 kubelet[2283]: I1101 00:22:09.497165 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.499462 kubelet[2283]: E1101 00:22:09.499412 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.499462 kubelet[2283]: I1101 00:22:09.499447 2283 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:09.501462 kubelet[2283]: E1101 00:22:09.501413 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-a2a464dc28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:10.059907 kubelet[2283]: I1101 00:22:10.059823 2283 apiserver.go:52] "Watching apiserver" Nov 1 00:22:10.090619 kubelet[2283]: I1101 00:22:10.090545 2283 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:11.284863 systemd[1]: Reloading requested from client PID 2561 ('systemctl') (unit session-7.scope)... Nov 1 00:22:11.284891 systemd[1]: Reloading... Nov 1 00:22:11.423719 zram_generator::config[2601]: No configuration found. Nov 1 00:22:11.522817 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:11.611884 systemd[1]: Reloading finished in 326 ms. Nov 1 00:22:11.678044 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:11.697599 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:22:11.697969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:11.698046 systemd[1]: kubelet.service: Consumed 1.217s CPU time, 130.6M memory peak, 0B memory swap peak. Nov 1 00:22:11.706279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:11.849308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:22:11.853050 (kubelet)[2652]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:22:11.933805 kubelet[2652]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:11.933805 kubelet[2652]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:11.933805 kubelet[2652]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:11.933805 kubelet[2652]: I1101 00:22:11.933193 2652 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:11.942004 kubelet[2652]: I1101 00:22:11.941960 2652 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:22:11.942004 kubelet[2652]: I1101 00:22:11.941994 2652 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:11.942368 kubelet[2652]: I1101 00:22:11.942338 2652 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:22:11.946148 kubelet[2652]: I1101 00:22:11.946110 2652 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 1 00:22:11.950215 kubelet[2652]: I1101 00:22:11.949835 2652 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:11.970377 kubelet[2652]: E1101 00:22:11.970311 2652 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:11.971717 kubelet[2652]: I1101 00:22:11.970503 2652 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:11.975565 kubelet[2652]: I1101 00:22:11.975520 2652 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:22:11.975817 kubelet[2652]: I1101 00:22:11.975762 2652 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:11.976171 kubelet[2652]: I1101 00:22:11.975812 2652 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-3-6-n-a2a464dc28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:22:11.976316 kubelet[2652]: I1101 00:22:11.976177 2652 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:22:11.976316 kubelet[2652]: I1101 00:22:11.976190 2652 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:22:11.976316 kubelet[2652]: I1101 00:22:11.976255 2652 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:11.976443 kubelet[2652]: I1101 00:22:11.976404 2652 kubelet.go:446] 
"Attempting to sync node with API server" Nov 1 00:22:11.976443 kubelet[2652]: I1101 00:22:11.976425 2652 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:11.976522 kubelet[2652]: I1101 00:22:11.976451 2652 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:22:11.976522 kubelet[2652]: I1101 00:22:11.976464 2652 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:11.986985 kubelet[2652]: I1101 00:22:11.986954 2652 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:22:11.987416 kubelet[2652]: I1101 00:22:11.987388 2652 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:22:11.987917 kubelet[2652]: I1101 00:22:11.987892 2652 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:22:11.987996 kubelet[2652]: I1101 00:22:11.987932 2652 server.go:1287] "Started kubelet" Nov 1 00:22:11.990582 kubelet[2652]: I1101 00:22:11.990340 2652 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:12.007772 kubelet[2652]: I1101 00:22:12.007730 2652 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:12.013423 kubelet[2652]: E1101 00:22:12.012543 2652 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:12.015418 kubelet[2652]: I1101 00:22:12.014110 2652 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:12.015418 kubelet[2652]: I1101 00:22:12.014440 2652 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:12.015418 kubelet[2652]: I1101 00:22:12.014605 2652 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:12.019082 kubelet[2652]: I1101 00:22:12.018642 2652 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:22:12.019082 kubelet[2652]: I1101 00:22:12.018743 2652 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:22:12.019082 kubelet[2652]: I1101 00:22:12.018874 2652 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:22:12.020235 kubelet[2652]: I1101 00:22:12.020210 2652 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:22:12.020316 kubelet[2652]: I1101 00:22:12.020291 2652 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:12.022322 kubelet[2652]: I1101 00:22:12.022282 2652 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:22:12.023354 kubelet[2652]: I1101 00:22:12.023310 2652 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:22:12.025988 kubelet[2652]: I1101 00:22:12.025973 2652 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:22:12.026094 kubelet[2652]: I1101 00:22:12.026085 2652 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:22:12.026194 kubelet[2652]: I1101 00:22:12.026185 2652 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:22:12.026277 kubelet[2652]: I1101 00:22:12.026246 2652 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:22:12.026401 kubelet[2652]: E1101 00:22:12.026377 2652 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:12.032107 kubelet[2652]: I1101 00:22:12.032065 2652 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:22:12.074996 kubelet[2652]: I1101 00:22:12.074940 2652 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:12.074996 kubelet[2652]: I1101 00:22:12.074961 2652 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:12.074996 kubelet[2652]: I1101 00:22:12.074977 2652 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:12.075239 kubelet[2652]: I1101 00:22:12.075114 2652 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:22:12.075239 kubelet[2652]: I1101 00:22:12.075123 2652 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:22:12.075239 kubelet[2652]: I1101 00:22:12.075139 2652 policy_none.go:49] "None policy: Start" Nov 1 00:22:12.075239 kubelet[2652]: I1101 00:22:12.075148 2652 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:22:12.075239 kubelet[2652]: I1101 00:22:12.075156 2652 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:22:12.075377 kubelet[2652]: I1101 00:22:12.075252 2652 state_mem.go:75] "Updated machine memory state" Nov 1 00:22:12.079172 kubelet[2652]: I1101 00:22:12.079141 2652 manager.go:519] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:22:12.079303 kubelet[2652]: I1101 00:22:12.079279 2652 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:12.079339 kubelet[2652]: I1101 00:22:12.079295 2652 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:12.081382 kubelet[2652]: I1101 00:22:12.080638 2652 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:12.083473 kubelet[2652]: E1101 00:22:12.083143 2652 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:12.127426 kubelet[2652]: I1101 00:22:12.127345 2652 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.128558 kubelet[2652]: I1101 00:22:12.128268 2652 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.128558 kubelet[2652]: I1101 00:22:12.128426 2652 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.187979 kubelet[2652]: I1101 00:22:12.187718 2652 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.202087 kubelet[2652]: I1101 00:22:12.201465 2652 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.202087 kubelet[2652]: I1101 00:22:12.201598 2652 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.320448 kubelet[2652]: I1101 00:22:12.320383 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.320448 kubelet[2652]: I1101 00:22:12.320453 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca66ac987e61670a8d0eb032083d31c7-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a2a464dc28\" (UID: \"ca66ac987e61670a8d0eb032083d31c7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.320734 kubelet[2652]: I1101 00:22:12.320488 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.320734 kubelet[2652]: I1101 00:22:12.320525 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.320734 kubelet[2652]: I1101 00:22:12.320552 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.320734 kubelet[2652]: 
I1101 00:22:12.320578 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f7ee3b28814e68be6aad1025a9c8f30-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-a2a464dc28\" (UID: \"6f7ee3b28814e68be6aad1025a9c8f30\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.320734 kubelet[2652]: I1101 00:22:12.320603 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca66ac987e61670a8d0eb032083d31c7-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-a2a464dc28\" (UID: \"ca66ac987e61670a8d0eb032083d31c7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.320968 kubelet[2652]: I1101 00:22:12.320645 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca66ac987e61670a8d0eb032083d31c7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-a2a464dc28\" (UID: \"ca66ac987e61670a8d0eb032083d31c7\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.320968 kubelet[2652]: I1101 00:22:12.320716 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c010350a89ca9af82c9473c442f6e91-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-a2a464dc28\" (UID: \"1c010350a89ca9af82c9473c442f6e91\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:12.984424 kubelet[2652]: I1101 00:22:12.984293 2652 apiserver.go:52] "Watching apiserver" Nov 1 00:22:13.019700 kubelet[2652]: I1101 00:22:13.018861 2652 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:13.067782 kubelet[2652]: I1101 00:22:13.066922 2652 kubelet.go:3194] 
"Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:13.078190 kubelet[2652]: I1101 00:22:13.078109 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" podStartSLOduration=1.078084853 podStartE2EDuration="1.078084853s" podCreationTimestamp="2025-11-01 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:13.058887393 +0000 UTC m=+1.198400236" watchObservedRunningTime="2025-11-01 00:22:13.078084853 +0000 UTC m=+1.217597717" Nov 1 00:22:13.078951 kubelet[2652]: E1101 00:22:13.078576 2652 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-a2a464dc28\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:13.090185 kubelet[2652]: I1101 00:22:13.090123 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-a2a464dc28" podStartSLOduration=1.090104696 podStartE2EDuration="1.090104696s" podCreationTimestamp="2025-11-01 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:13.078376406 +0000 UTC m=+1.217889259" watchObservedRunningTime="2025-11-01 00:22:13.090104696 +0000 UTC m=+1.229617549" Nov 1 00:22:13.111186 kubelet[2652]: I1101 00:22:13.110817 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-a2a464dc28" podStartSLOduration=1.110792183 podStartE2EDuration="1.110792183s" podCreationTimestamp="2025-11-01 00:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:13.090249125 +0000 UTC m=+1.229761968" 
watchObservedRunningTime="2025-11-01 00:22:13.110792183 +0000 UTC m=+1.250305026" Nov 1 00:22:17.455215 kubelet[2652]: I1101 00:22:17.454893 2652 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:22:17.456058 kubelet[2652]: I1101 00:22:17.455932 2652 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:22:17.457045 containerd[1505]: time="2025-11-01T00:22:17.455489707Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:22:18.398391 systemd[1]: Created slice kubepods-besteffort-pode7a8aba7_8d1a_4832_a8c0_95e492ac7b90.slice - libcontainer container kubepods-besteffort-pode7a8aba7_8d1a_4832_a8c0_95e492ac7b90.slice. Nov 1 00:22:18.460396 kubelet[2652]: I1101 00:22:18.460327 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a8aba7-8d1a-4832-a8c0-95e492ac7b90-xtables-lock\") pod \"kube-proxy-tjfhf\" (UID: \"e7a8aba7-8d1a-4832-a8c0-95e492ac7b90\") " pod="kube-system/kube-proxy-tjfhf" Nov 1 00:22:18.460396 kubelet[2652]: I1101 00:22:18.460401 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7a8aba7-8d1a-4832-a8c0-95e492ac7b90-kube-proxy\") pod \"kube-proxy-tjfhf\" (UID: \"e7a8aba7-8d1a-4832-a8c0-95e492ac7b90\") " pod="kube-system/kube-proxy-tjfhf" Nov 1 00:22:18.460396 kubelet[2652]: I1101 00:22:18.460429 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a8aba7-8d1a-4832-a8c0-95e492ac7b90-lib-modules\") pod \"kube-proxy-tjfhf\" (UID: \"e7a8aba7-8d1a-4832-a8c0-95e492ac7b90\") " pod="kube-system/kube-proxy-tjfhf" Nov 1 00:22:18.461140 kubelet[2652]: I1101 00:22:18.460458 2652 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dphmz\" (UniqueName: \"kubernetes.io/projected/e7a8aba7-8d1a-4832-a8c0-95e492ac7b90-kube-api-access-dphmz\") pod \"kube-proxy-tjfhf\" (UID: \"e7a8aba7-8d1a-4832-a8c0-95e492ac7b90\") " pod="kube-system/kube-proxy-tjfhf" Nov 1 00:22:18.549113 systemd[1]: Created slice kubepods-besteffort-podef851d64_5079_4813_a125_cc67a8fefecc.slice - libcontainer container kubepods-besteffort-podef851d64_5079_4813_a125_cc67a8fefecc.slice. Nov 1 00:22:18.561455 kubelet[2652]: I1101 00:22:18.561077 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef851d64-5079-4813-a125-cc67a8fefecc-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rgfqf\" (UID: \"ef851d64-5079-4813-a125-cc67a8fefecc\") " pod="tigera-operator/tigera-operator-7dcd859c48-rgfqf" Nov 1 00:22:18.561455 kubelet[2652]: I1101 00:22:18.561260 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfw9m\" (UniqueName: \"kubernetes.io/projected/ef851d64-5079-4813-a125-cc67a8fefecc-kube-api-access-qfw9m\") pod \"tigera-operator-7dcd859c48-rgfqf\" (UID: \"ef851d64-5079-4813-a125-cc67a8fefecc\") " pod="tigera-operator/tigera-operator-7dcd859c48-rgfqf" Nov 1 00:22:18.709598 containerd[1505]: time="2025-11-01T00:22:18.709413924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tjfhf,Uid:e7a8aba7-8d1a-4832-a8c0-95e492ac7b90,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:18.743714 containerd[1505]: time="2025-11-01T00:22:18.743480593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:18.743714 containerd[1505]: time="2025-11-01T00:22:18.743581970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:18.743714 containerd[1505]: time="2025-11-01T00:22:18.743611484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:18.746573 containerd[1505]: time="2025-11-01T00:22:18.744439780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:18.779860 systemd[1]: Started cri-containerd-00f06d31e56a9c7a0b4fe1c32d73e03423493173b869a8f5e92e355b90999c03.scope - libcontainer container 00f06d31e56a9c7a0b4fe1c32d73e03423493173b869a8f5e92e355b90999c03. Nov 1 00:22:18.819057 containerd[1505]: time="2025-11-01T00:22:18.818987460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tjfhf,Uid:e7a8aba7-8d1a-4832-a8c0-95e492ac7b90,Namespace:kube-system,Attempt:0,} returns sandbox id \"00f06d31e56a9c7a0b4fe1c32d73e03423493173b869a8f5e92e355b90999c03\"" Nov 1 00:22:18.825723 containerd[1505]: time="2025-11-01T00:22:18.824534983Z" level=info msg="CreateContainer within sandbox \"00f06d31e56a9c7a0b4fe1c32d73e03423493173b869a8f5e92e355b90999c03\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:22:18.847154 containerd[1505]: time="2025-11-01T00:22:18.847091317Z" level=info msg="CreateContainer within sandbox \"00f06d31e56a9c7a0b4fe1c32d73e03423493173b869a8f5e92e355b90999c03\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d70c7e0eb8882db0db38b2c27df890ae19a32756aa684e10efb2d93b0e71bcdb\"" Nov 1 00:22:18.847852 containerd[1505]: time="2025-11-01T00:22:18.847807894Z" level=info msg="StartContainer for \"d70c7e0eb8882db0db38b2c27df890ae19a32756aa684e10efb2d93b0e71bcdb\"" Nov 1 00:22:18.857824 containerd[1505]: time="2025-11-01T00:22:18.857238059Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rgfqf,Uid:ef851d64-5079-4813-a125-cc67a8fefecc,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:22:18.887059 systemd[1]: Started cri-containerd-d70c7e0eb8882db0db38b2c27df890ae19a32756aa684e10efb2d93b0e71bcdb.scope - libcontainer container d70c7e0eb8882db0db38b2c27df890ae19a32756aa684e10efb2d93b0e71bcdb. Nov 1 00:22:18.898295 containerd[1505]: time="2025-11-01T00:22:18.898017198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:18.898458 containerd[1505]: time="2025-11-01T00:22:18.898315830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:18.898531 containerd[1505]: time="2025-11-01T00:22:18.898423249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:18.898892 containerd[1505]: time="2025-11-01T00:22:18.898829821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:18.919823 systemd[1]: Started cri-containerd-ef0af0b4977dc450fe80438c546efd245d8702eff561149b200d9433354e65a1.scope - libcontainer container ef0af0b4977dc450fe80438c546efd245d8702eff561149b200d9433354e65a1. 
Nov 1 00:22:18.944262 containerd[1505]: time="2025-11-01T00:22:18.944152792Z" level=info msg="StartContainer for \"d70c7e0eb8882db0db38b2c27df890ae19a32756aa684e10efb2d93b0e71bcdb\" returns successfully" Nov 1 00:22:18.959469 containerd[1505]: time="2025-11-01T00:22:18.959408095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rgfqf,Uid:ef851d64-5079-4813-a125-cc67a8fefecc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ef0af0b4977dc450fe80438c546efd245d8702eff561149b200d9433354e65a1\"" Nov 1 00:22:18.963234 containerd[1505]: time="2025-11-01T00:22:18.962911324Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:22:21.539422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1428112141.mount: Deactivated successfully. Nov 1 00:22:22.240318 kubelet[2652]: I1101 00:22:22.239970 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tjfhf" podStartSLOduration=4.23995307 podStartE2EDuration="4.23995307s" podCreationTimestamp="2025-11-01 00:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:19.106572816 +0000 UTC m=+7.246085689" watchObservedRunningTime="2025-11-01 00:22:22.23995307 +0000 UTC m=+10.379465913" Nov 1 00:22:22.263150 containerd[1505]: time="2025-11-01T00:22:22.263103210Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:22.264278 containerd[1505]: time="2025-11-01T00:22:22.264180120Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:22:22.265580 containerd[1505]: time="2025-11-01T00:22:22.265557722Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 1 00:22:22.267603 containerd[1505]: time="2025-11-01T00:22:22.267570393Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:22.268592 containerd[1505]: time="2025-11-01T00:22:22.268113708Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.305159632s" Nov 1 00:22:22.268592 containerd[1505]: time="2025-11-01T00:22:22.268135574Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:22:22.269957 containerd[1505]: time="2025-11-01T00:22:22.269834333Z" level=info msg="CreateContainer within sandbox \"ef0af0b4977dc450fe80438c546efd245d8702eff561149b200d9433354e65a1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:22:22.285140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078749918.mount: Deactivated successfully. 
Nov 1 00:22:22.292034 containerd[1505]: time="2025-11-01T00:22:22.291977042Z" level=info msg="CreateContainer within sandbox \"ef0af0b4977dc450fe80438c546efd245d8702eff561149b200d9433354e65a1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff\"" Nov 1 00:22:22.292530 containerd[1505]: time="2025-11-01T00:22:22.292500455Z" level=info msg="StartContainer for \"135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff\"" Nov 1 00:22:22.323799 systemd[1]: Started cri-containerd-135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff.scope - libcontainer container 135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff. Nov 1 00:22:22.351708 containerd[1505]: time="2025-11-01T00:22:22.351562346Z" level=info msg="StartContainer for \"135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff\" returns successfully" Nov 1 00:22:23.106686 kubelet[2652]: I1101 00:22:23.106564 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rgfqf" podStartSLOduration=1.7989028390000001 podStartE2EDuration="5.106547105s" podCreationTimestamp="2025-11-01 00:22:18 +0000 UTC" firstStartedPulling="2025-11-01 00:22:18.961062722 +0000 UTC m=+7.100575564" lastFinishedPulling="2025-11-01 00:22:22.268706988 +0000 UTC m=+10.408219830" observedRunningTime="2025-11-01 00:22:23.105756846 +0000 UTC m=+11.245269689" watchObservedRunningTime="2025-11-01 00:22:23.106547105 +0000 UTC m=+11.246059949" Nov 1 00:22:28.484488 sudo[1790]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:28.651115 sshd[1787]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:28.656976 systemd[1]: sshd@6-95.217.181.13:22-147.75.109.163:36818.service: Deactivated successfully. Nov 1 00:22:28.659692 systemd[1]: session-7.scope: Deactivated successfully. 
Nov 1 00:22:28.659991 systemd[1]: session-7.scope: Consumed 5.621s CPU time, 141.9M memory peak, 0B memory swap peak. Nov 1 00:22:28.660557 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:22:28.661497 systemd-logind[1487]: Removed session 7. Nov 1 00:22:33.139348 systemd[1]: Created slice kubepods-besteffort-podaec02d03_615c_4127_aa8a_461042156b02.slice - libcontainer container kubepods-besteffort-podaec02d03_615c_4127_aa8a_461042156b02.slice. Nov 1 00:22:33.159841 kubelet[2652]: I1101 00:22:33.159794 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aec02d03-615c-4127-aa8a-461042156b02-typha-certs\") pod \"calico-typha-5fd89fc4fc-z44pw\" (UID: \"aec02d03-615c-4127-aa8a-461042156b02\") " pod="calico-system/calico-typha-5fd89fc4fc-z44pw" Nov 1 00:22:33.159841 kubelet[2652]: I1101 00:22:33.159837 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aec02d03-615c-4127-aa8a-461042156b02-tigera-ca-bundle\") pod \"calico-typha-5fd89fc4fc-z44pw\" (UID: \"aec02d03-615c-4127-aa8a-461042156b02\") " pod="calico-system/calico-typha-5fd89fc4fc-z44pw" Nov 1 00:22:33.159841 kubelet[2652]: I1101 00:22:33.159866 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skgj\" (UniqueName: \"kubernetes.io/projected/aec02d03-615c-4127-aa8a-461042156b02-kube-api-access-4skgj\") pod \"calico-typha-5fd89fc4fc-z44pw\" (UID: \"aec02d03-615c-4127-aa8a-461042156b02\") " pod="calico-system/calico-typha-5fd89fc4fc-z44pw" Nov 1 00:22:33.361454 kubelet[2652]: I1101 00:22:33.361407 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-cni-bin-dir\") pod 
\"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361631 kubelet[2652]: I1101 00:22:33.361470 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-node-certs\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361631 kubelet[2652]: I1101 00:22:33.361511 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-xtables-lock\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361631 kubelet[2652]: I1101 00:22:33.361533 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-policysync\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361631 kubelet[2652]: I1101 00:22:33.361556 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-cni-log-dir\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361631 kubelet[2652]: I1101 00:22:33.361574 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h547p\" (UniqueName: \"kubernetes.io/projected/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-kube-api-access-h547p\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " 
pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361778 kubelet[2652]: I1101 00:22:33.361593 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-var-run-calico\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361778 kubelet[2652]: I1101 00:22:33.361615 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-cni-net-dir\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361778 kubelet[2652]: I1101 00:22:33.361635 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-flexvol-driver-host\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361839 kubelet[2652]: I1101 00:22:33.361657 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-lib-modules\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361839 kubelet[2652]: I1101 00:22:33.361806 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-tigera-ca-bundle\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.361882 
kubelet[2652]: I1101 00:22:33.361827 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c65813c8-5c2c-4b95-a019-e0892fdbe2bf-var-lib-calico\") pod \"calico-node-lzsh4\" (UID: \"c65813c8-5c2c-4b95-a019-e0892fdbe2bf\") " pod="calico-system/calico-node-lzsh4" Nov 1 00:22:33.363067 systemd[1]: Created slice kubepods-besteffort-podc65813c8_5c2c_4b95_a019_e0892fdbe2bf.slice - libcontainer container kubepods-besteffort-podc65813c8_5c2c_4b95_a019_e0892fdbe2bf.slice. Nov 1 00:22:33.465712 containerd[1505]: time="2025-11-01T00:22:33.465319792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fd89fc4fc-z44pw,Uid:aec02d03-615c-4127-aa8a-461042156b02,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:33.508749 kubelet[2652]: E1101 00:22:33.507254 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.508749 kubelet[2652]: W1101 00:22:33.507287 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.512169 kubelet[2652]: E1101 00:22:33.512132 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.542559 containerd[1505]: time="2025-11-01T00:22:33.542441188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:33.543011 containerd[1505]: time="2025-11-01T00:22:33.542540562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:33.543011 containerd[1505]: time="2025-11-01T00:22:33.542561714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:33.543011 containerd[1505]: time="2025-11-01T00:22:33.542737063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:33.564932 kubelet[2652]: E1101 00:22:33.564523 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:22:33.599379 systemd[1]: Started cri-containerd-93c962afc9c60a9d3c399b3330e095c9fb8a800a6fa19c9787024ccd6abdfe29.scope - libcontainer container 93c962afc9c60a9d3c399b3330e095c9fb8a800a6fa19c9787024ccd6abdfe29. Nov 1 00:22:33.655715 kubelet[2652]: E1101 00:22:33.655644 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.655715 kubelet[2652]: W1101 00:22:33.655720 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.656089 kubelet[2652]: E1101 00:22:33.655746 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.657333 kubelet[2652]: E1101 00:22:33.656407 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.657333 kubelet[2652]: W1101 00:22:33.656431 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.657333 kubelet[2652]: E1101 00:22:33.656458 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.661143 kubelet[2652]: E1101 00:22:33.658786 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.661143 kubelet[2652]: W1101 00:22:33.658801 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.661143 kubelet[2652]: E1101 00:22:33.659011 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.661143 kubelet[2652]: E1101 00:22:33.659408 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.661143 kubelet[2652]: W1101 00:22:33.659416 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.661143 kubelet[2652]: E1101 00:22:33.659424 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.661143 kubelet[2652]: E1101 00:22:33.659617 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.661143 kubelet[2652]: W1101 00:22:33.659625 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.661143 kubelet[2652]: E1101 00:22:33.659632 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.661143 kubelet[2652]: E1101 00:22:33.659934 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.661376 kubelet[2652]: W1101 00:22:33.659941 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.661376 kubelet[2652]: E1101 00:22:33.659950 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.661376 kubelet[2652]: E1101 00:22:33.660109 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.661376 kubelet[2652]: W1101 00:22:33.660117 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.661376 kubelet[2652]: E1101 00:22:33.660124 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.661376 kubelet[2652]: E1101 00:22:33.660342 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.661376 kubelet[2652]: W1101 00:22:33.660349 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.661376 kubelet[2652]: E1101 00:22:33.660357 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.661376 kubelet[2652]: E1101 00:22:33.660529 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.661376 kubelet[2652]: W1101 00:22:33.660535 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.661556 kubelet[2652]: E1101 00:22:33.660542 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.661556 kubelet[2652]: E1101 00:22:33.660807 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.661556 kubelet[2652]: W1101 00:22:33.660815 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.661556 kubelet[2652]: E1101 00:22:33.660822 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.661556 kubelet[2652]: E1101 00:22:33.661019 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.661556 kubelet[2652]: W1101 00:22:33.661089 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.661556 kubelet[2652]: E1101 00:22:33.661097 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.662214 kubelet[2652]: E1101 00:22:33.661821 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.662214 kubelet[2652]: W1101 00:22:33.661829 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.662214 kubelet[2652]: E1101 00:22:33.661837 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.662214 kubelet[2652]: E1101 00:22:33.661979 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.662214 kubelet[2652]: W1101 00:22:33.661986 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.662214 kubelet[2652]: E1101 00:22:33.661993 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.662524 kubelet[2652]: E1101 00:22:33.662389 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.662524 kubelet[2652]: W1101 00:22:33.662403 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.662524 kubelet[2652]: E1101 00:22:33.662410 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.662698 kubelet[2652]: E1101 00:22:33.662688 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.662698 kubelet[2652]: W1101 00:22:33.662696 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.662752 kubelet[2652]: E1101 00:22:33.662704 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.663108 kubelet[2652]: E1101 00:22:33.662913 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.663108 kubelet[2652]: W1101 00:22:33.662921 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.663108 kubelet[2652]: E1101 00:22:33.662929 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.663294 kubelet[2652]: E1101 00:22:33.663265 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.663294 kubelet[2652]: W1101 00:22:33.663274 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.663294 kubelet[2652]: E1101 00:22:33.663282 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.664120 kubelet[2652]: E1101 00:22:33.663578 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.664120 kubelet[2652]: W1101 00:22:33.663587 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.664120 kubelet[2652]: E1101 00:22:33.663595 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.664120 kubelet[2652]: E1101 00:22:33.663766 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.664120 kubelet[2652]: W1101 00:22:33.663774 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.664120 kubelet[2652]: E1101 00:22:33.663783 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.664120 kubelet[2652]: E1101 00:22:33.663891 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.664120 kubelet[2652]: W1101 00:22:33.663897 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.664120 kubelet[2652]: E1101 00:22:33.663903 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.666650 kubelet[2652]: E1101 00:22:33.666436 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.666650 kubelet[2652]: W1101 00:22:33.666457 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.666650 kubelet[2652]: E1101 00:22:33.666472 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.666650 kubelet[2652]: I1101 00:22:33.666506 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ae9e8348-8b23-4471-92e0-30ed8445c882-socket-dir\") pod \"csi-node-driver-4lkfc\" (UID: \"ae9e8348-8b23-4471-92e0-30ed8445c882\") " pod="calico-system/csi-node-driver-4lkfc" Nov 1 00:22:33.667694 kubelet[2652]: E1101 00:22:33.667561 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.667694 kubelet[2652]: W1101 00:22:33.667580 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.667694 kubelet[2652]: E1101 00:22:33.667601 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.667694 kubelet[2652]: I1101 00:22:33.667618 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ae9e8348-8b23-4471-92e0-30ed8445c882-varrun\") pod \"csi-node-driver-4lkfc\" (UID: \"ae9e8348-8b23-4471-92e0-30ed8445c882\") " pod="calico-system/csi-node-driver-4lkfc" Nov 1 00:22:33.668079 kubelet[2652]: E1101 00:22:33.667777 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.668079 kubelet[2652]: W1101 00:22:33.667785 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.668079 kubelet[2652]: E1101 00:22:33.667836 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.668079 kubelet[2652]: E1101 00:22:33.667889 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.668079 kubelet[2652]: I1101 00:22:33.667890 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ae9e8348-8b23-4471-92e0-30ed8445c882-kubelet-dir\") pod \"csi-node-driver-4lkfc\" (UID: \"ae9e8348-8b23-4471-92e0-30ed8445c882\") " pod="calico-system/csi-node-driver-4lkfc" Nov 1 00:22:33.668079 kubelet[2652]: W1101 00:22:33.667896 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.668079 kubelet[2652]: E1101 00:22:33.667905 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.668079 kubelet[2652]: E1101 00:22:33.668016 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.668079 kubelet[2652]: W1101 00:22:33.668022 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.668251 kubelet[2652]: E1101 00:22:33.668040 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.668251 kubelet[2652]: E1101 00:22:33.668164 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.668251 kubelet[2652]: W1101 00:22:33.668171 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.668251 kubelet[2652]: E1101 00:22:33.668178 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.670498 kubelet[2652]: E1101 00:22:33.670132 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.670498 kubelet[2652]: W1101 00:22:33.670145 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.670498 kubelet[2652]: E1101 00:22:33.670158 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.670498 kubelet[2652]: I1101 00:22:33.670185 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ae9e8348-8b23-4471-92e0-30ed8445c882-registration-dir\") pod \"csi-node-driver-4lkfc\" (UID: \"ae9e8348-8b23-4471-92e0-30ed8445c882\") " pod="calico-system/csi-node-driver-4lkfc" Nov 1 00:22:33.672386 kubelet[2652]: E1101 00:22:33.672207 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.672386 kubelet[2652]: W1101 00:22:33.672222 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.672386 kubelet[2652]: E1101 00:22:33.672236 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.672386 kubelet[2652]: E1101 00:22:33.672365 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.672386 kubelet[2652]: W1101 00:22:33.672371 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.672386 kubelet[2652]: E1101 00:22:33.672379 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.672696 kubelet[2652]: E1101 00:22:33.672597 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.672696 kubelet[2652]: W1101 00:22:33.672606 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.672696 kubelet[2652]: E1101 00:22:33.672616 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.672939 kubelet[2652]: E1101 00:22:33.672780 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.672939 kubelet[2652]: W1101 00:22:33.672790 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.672939 kubelet[2652]: E1101 00:22:33.672807 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.672939 kubelet[2652]: I1101 00:22:33.672821 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxqgv\" (UniqueName: \"kubernetes.io/projected/ae9e8348-8b23-4471-92e0-30ed8445c882-kube-api-access-cxqgv\") pod \"csi-node-driver-4lkfc\" (UID: \"ae9e8348-8b23-4471-92e0-30ed8445c882\") " pod="calico-system/csi-node-driver-4lkfc" Nov 1 00:22:33.674468 kubelet[2652]: E1101 00:22:33.672954 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.674468 kubelet[2652]: W1101 00:22:33.672962 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.674468 kubelet[2652]: E1101 00:22:33.672969 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.674468 kubelet[2652]: E1101 00:22:33.673134 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.674468 kubelet[2652]: W1101 00:22:33.673141 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.674468 kubelet[2652]: E1101 00:22:33.673147 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.674468 kubelet[2652]: E1101 00:22:33.673618 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.674468 kubelet[2652]: W1101 00:22:33.673634 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.674468 kubelet[2652]: E1101 00:22:33.673651 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.674468 kubelet[2652]: E1101 00:22:33.673922 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.674706 kubelet[2652]: W1101 00:22:33.673930 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.674706 kubelet[2652]: E1101 00:22:33.673955 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.679288 containerd[1505]: time="2025-11-01T00:22:33.678360765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fd89fc4fc-z44pw,Uid:aec02d03-615c-4127-aa8a-461042156b02,Namespace:calico-system,Attempt:0,} returns sandbox id \"93c962afc9c60a9d3c399b3330e095c9fb8a800a6fa19c9787024ccd6abdfe29\"" Nov 1 00:22:33.684106 containerd[1505]: time="2025-11-01T00:22:33.683685760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lzsh4,Uid:c65813c8-5c2c-4b95-a019-e0892fdbe2bf,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:33.686619 containerd[1505]: time="2025-11-01T00:22:33.686586038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:22:33.714771 containerd[1505]: time="2025-11-01T00:22:33.714304886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:33.714771 containerd[1505]: time="2025-11-01T00:22:33.714444430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:33.714771 containerd[1505]: time="2025-11-01T00:22:33.714482088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:33.715097 containerd[1505]: time="2025-11-01T00:22:33.714778173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:33.728799 systemd[1]: Started cri-containerd-000b8fdb0b40a47e64e5b19dfe2abf567ff09718786557801d0fbef09a975154.scope - libcontainer container 000b8fdb0b40a47e64e5b19dfe2abf567ff09718786557801d0fbef09a975154. 
Nov 1 00:22:33.761608 containerd[1505]: time="2025-11-01T00:22:33.761552640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lzsh4,Uid:c65813c8-5c2c-4b95-a019-e0892fdbe2bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"000b8fdb0b40a47e64e5b19dfe2abf567ff09718786557801d0fbef09a975154\"" Nov 1 00:22:33.775279 kubelet[2652]: E1101 00:22:33.775234 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.775279 kubelet[2652]: W1101 00:22:33.775256 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.775279 kubelet[2652]: E1101 00:22:33.775273 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.775588 kubelet[2652]: E1101 00:22:33.775486 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.775588 kubelet[2652]: W1101 00:22:33.775494 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.775588 kubelet[2652]: E1101 00:22:33.775506 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.775745 kubelet[2652]: E1101 00:22:33.775717 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.775745 kubelet[2652]: W1101 00:22:33.775733 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.775745 kubelet[2652]: E1101 00:22:33.775744 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.775943 kubelet[2652]: E1101 00:22:33.775914 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.775943 kubelet[2652]: W1101 00:22:33.775921 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.775943 kubelet[2652]: E1101 00:22:33.775928 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.776120 kubelet[2652]: E1101 00:22:33.776096 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.776120 kubelet[2652]: W1101 00:22:33.776109 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.776120 kubelet[2652]: E1101 00:22:33.776117 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.776304 kubelet[2652]: E1101 00:22:33.776271 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.776304 kubelet[2652]: W1101 00:22:33.776284 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.776304 kubelet[2652]: E1101 00:22:33.776292 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.776431 kubelet[2652]: E1101 00:22:33.776409 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.776431 kubelet[2652]: W1101 00:22:33.776422 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.776431 kubelet[2652]: E1101 00:22:33.776428 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.776546 kubelet[2652]: E1101 00:22:33.776531 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.776546 kubelet[2652]: W1101 00:22:33.776542 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.776596 kubelet[2652]: E1101 00:22:33.776556 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.776774 kubelet[2652]: E1101 00:22:33.776757 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.776774 kubelet[2652]: W1101 00:22:33.776768 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.776885 kubelet[2652]: E1101 00:22:33.776860 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.777692 kubelet[2652]: E1101 00:22:33.777652 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.777692 kubelet[2652]: W1101 00:22:33.777682 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.777787 kubelet[2652]: E1101 00:22:33.777765 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.777898 kubelet[2652]: E1101 00:22:33.777877 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.777898 kubelet[2652]: W1101 00:22:33.777891 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.777992 kubelet[2652]: E1101 00:22:33.777970 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.778211 kubelet[2652]: E1101 00:22:33.778181 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.778211 kubelet[2652]: W1101 00:22:33.778194 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.778322 kubelet[2652]: E1101 00:22:33.778282 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.778633 kubelet[2652]: E1101 00:22:33.778611 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.778633 kubelet[2652]: W1101 00:22:33.778625 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.778891 kubelet[2652]: E1101 00:22:33.778867 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.779072 kubelet[2652]: E1101 00:22:33.779051 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.779072 kubelet[2652]: W1101 00:22:33.779064 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.779161 kubelet[2652]: E1101 00:22:33.779101 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.779289 kubelet[2652]: E1101 00:22:33.779250 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.779289 kubelet[2652]: W1101 00:22:33.779262 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.779338 kubelet[2652]: E1101 00:22:33.779314 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.779475 kubelet[2652]: E1101 00:22:33.779432 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.779475 kubelet[2652]: W1101 00:22:33.779445 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.779616 kubelet[2652]: E1101 00:22:33.779477 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.779658 kubelet[2652]: E1101 00:22:33.779648 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.779712 kubelet[2652]: W1101 00:22:33.779657 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.779821 kubelet[2652]: E1101 00:22:33.779789 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.780587 kubelet[2652]: E1101 00:22:33.780564 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.780587 kubelet[2652]: W1101 00:22:33.780578 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.780692 kubelet[2652]: E1101 00:22:33.780589 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.780937 kubelet[2652]: E1101 00:22:33.780909 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.780937 kubelet[2652]: W1101 00:22:33.780923 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.780937 kubelet[2652]: E1101 00:22:33.780931 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.781109 kubelet[2652]: E1101 00:22:33.781080 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.781109 kubelet[2652]: W1101 00:22:33.781103 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.781161 kubelet[2652]: E1101 00:22:33.781122 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.781436 kubelet[2652]: E1101 00:22:33.781405 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.781436 kubelet[2652]: W1101 00:22:33.781419 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.781587 kubelet[2652]: E1101 00:22:33.781556 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.782245 kubelet[2652]: E1101 00:22:33.782221 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.782245 kubelet[2652]: W1101 00:22:33.782237 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.782432 kubelet[2652]: E1101 00:22:33.782409 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.782625 kubelet[2652]: E1101 00:22:33.782604 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.782625 kubelet[2652]: W1101 00:22:33.782618 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.782771 kubelet[2652]: E1101 00:22:33.782757 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.783131 kubelet[2652]: E1101 00:22:33.783110 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.783131 kubelet[2652]: W1101 00:22:33.783123 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.783981 kubelet[2652]: E1101 00:22:33.783959 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:33.784121 kubelet[2652]: E1101 00:22:33.784079 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.784121 kubelet[2652]: W1101 00:22:33.784104 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.784121 kubelet[2652]: E1101 00:22:33.784113 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:33.787619 kubelet[2652]: E1101 00:22:33.787593 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:33.787619 kubelet[2652]: W1101 00:22:33.787610 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:33.787619 kubelet[2652]: E1101 00:22:33.787621 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:35.027075 kubelet[2652]: E1101 00:22:35.027009 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:22:35.589156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748866079.mount: Deactivated successfully. 
Nov 1 00:22:36.119169 containerd[1505]: time="2025-11-01T00:22:36.119123047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:36.120446 containerd[1505]: time="2025-11-01T00:22:36.120394912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:22:36.121553 containerd[1505]: time="2025-11-01T00:22:36.121485641Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:36.134932 containerd[1505]: time="2025-11-01T00:22:36.134876427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:36.135643 containerd[1505]: time="2025-11-01T00:22:36.135367415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.448747138s" Nov 1 00:22:36.135643 containerd[1505]: time="2025-11-01T00:22:36.135404811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:22:36.140350 containerd[1505]: time="2025-11-01T00:22:36.140179450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:22:36.162807 containerd[1505]: time="2025-11-01T00:22:36.162774339Z" level=info msg="CreateContainer within sandbox \"93c962afc9c60a9d3c399b3330e095c9fb8a800a6fa19c9787024ccd6abdfe29\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:22:36.181527 containerd[1505]: time="2025-11-01T00:22:36.181451445Z" level=info msg="CreateContainer within sandbox \"93c962afc9c60a9d3c399b3330e095c9fb8a800a6fa19c9787024ccd6abdfe29\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9aeef2c374d28ada6baa5c860a6c3d0604cae2226baf33e4005d6ffb4db2ec27\"" Nov 1 00:22:36.183925 containerd[1505]: time="2025-11-01T00:22:36.182113901Z" level=info msg="StartContainer for \"9aeef2c374d28ada6baa5c860a6c3d0604cae2226baf33e4005d6ffb4db2ec27\"" Nov 1 00:22:36.215627 systemd[1]: Started cri-containerd-9aeef2c374d28ada6baa5c860a6c3d0604cae2226baf33e4005d6ffb4db2ec27.scope - libcontainer container 9aeef2c374d28ada6baa5c860a6c3d0604cae2226baf33e4005d6ffb4db2ec27. Nov 1 00:22:36.263932 containerd[1505]: time="2025-11-01T00:22:36.263723273Z" level=info msg="StartContainer for \"9aeef2c374d28ada6baa5c860a6c3d0604cae2226baf33e4005d6ffb4db2ec27\" returns successfully" Nov 1 00:22:36.537450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381637115.mount: Deactivated successfully. 
Nov 1 00:22:37.027541 kubelet[2652]: E1101 00:22:37.027458 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:22:37.198237 kubelet[2652]: E1101 00:22:37.198051 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:37.198237 kubelet[2652]: W1101 00:22:37.198091 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:37.198237 kubelet[2652]: E1101 00:22:37.198123 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:37.199263 kubelet[2652]: E1101 00:22:37.198714 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:37.199263 kubelet[2652]: W1101 00:22:37.198733 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:37.199263 kubelet[2652]: E1101 00:22:37.198751 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:37.199887 kubelet[2652]: E1101 00:22:37.199606 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:37.199887 kubelet[2652]: W1101 00:22:37.199626 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:37.199887 kubelet[2652]: E1101 00:22:37.199689 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:37.201241 kubelet[2652]: E1101 00:22:37.200186 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:37.201241 kubelet[2652]: W1101 00:22:37.200202 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:37.201241 kubelet[2652]: E1101 00:22:37.200219 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:37.201241 kubelet[2652]: E1101 00:22:37.200464 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:37.201241 kubelet[2652]: W1101 00:22:37.200477 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:37.201241 kubelet[2652]: E1101 00:22:37.200491 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:37.201241 kubelet[2652]: E1101 00:22:37.200773 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:37.201241 kubelet[2652]: W1101 00:22:37.200788 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:37.201241 kubelet[2652]: E1101 00:22:37.200807 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:37.204496 kubelet[2652]: E1101 00:22:37.202993 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:37.204496 kubelet[2652]: W1101 00:22:37.203010 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:37.204496 kubelet[2652]: E1101 00:22:37.203028 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:37.204496 kubelet[2652]: E1101 00:22:37.203259 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:37.204496 kubelet[2652]: W1101 00:22:37.203272 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:37.204496 kubelet[2652]: E1101 00:22:37.203288 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 1 00:22:37.204496 kubelet[2652]: E1101 00:22:37.203515 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.204496 kubelet[2652]: W1101 00:22:37.203527 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.204496 kubelet[2652]: E1101 00:22:37.203541 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.204496 kubelet[2652]: E1101 00:22:37.203824 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.205088 kubelet[2652]: W1101 00:22:37.203837 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.205088 kubelet[2652]: E1101 00:22:37.203851 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.205088 kubelet[2652]: E1101 00:22:37.204352 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.205088 kubelet[2652]: W1101 00:22:37.204368 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.205088 kubelet[2652]: E1101 00:22:37.204385 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.205632 kubelet[2652]: E1101 00:22:37.205367 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.205632 kubelet[2652]: W1101 00:22:37.205390 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.205632 kubelet[2652]: E1101 00:22:37.205412 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.206184 kubelet[2652]: E1101 00:22:37.206031 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.206184 kubelet[2652]: W1101 00:22:37.206054 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.206184 kubelet[2652]: E1101 00:22:37.206071 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.206564 kubelet[2652]: E1101 00:22:37.206394 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.206564 kubelet[2652]: W1101 00:22:37.206408 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.206564 kubelet[2652]: E1101 00:22:37.206423 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.206946 kubelet[2652]: E1101 00:22:37.206927 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.207329 kubelet[2652]: W1101 00:22:37.207038 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.207329 kubelet[2652]: E1101 00:22:37.207060 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.207743 kubelet[2652]: E1101 00:22:37.207631 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.208267 kubelet[2652]: W1101 00:22:37.207846 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.208267 kubelet[2652]: E1101 00:22:37.207920 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.208441 kubelet[2652]: E1101 00:22:37.208422 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.208591 kubelet[2652]: W1101 00:22:37.208520 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.208591 kubelet[2652]: E1101 00:22:37.208553 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.210870 kubelet[2652]: E1101 00:22:37.208886 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.210870 kubelet[2652]: W1101 00:22:37.208900 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.210870 kubelet[2652]: E1101 00:22:37.208917 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.210870 kubelet[2652]: E1101 00:22:37.209168 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.210870 kubelet[2652]: W1101 00:22:37.209181 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.210870 kubelet[2652]: E1101 00:22:37.209196 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.210870 kubelet[2652]: E1101 00:22:37.209393 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.210870 kubelet[2652]: W1101 00:22:37.209405 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.210870 kubelet[2652]: E1101 00:22:37.209418 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.210870 kubelet[2652]: E1101 00:22:37.209610 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.211423 kubelet[2652]: W1101 00:22:37.209622 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.211423 kubelet[2652]: E1101 00:22:37.209635 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.211423 kubelet[2652]: E1101 00:22:37.210009 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.211423 kubelet[2652]: W1101 00:22:37.210024 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.211423 kubelet[2652]: E1101 00:22:37.210042 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.211423 kubelet[2652]: E1101 00:22:37.210744 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.211423 kubelet[2652]: W1101 00:22:37.210762 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.211423 kubelet[2652]: E1101 00:22:37.210777 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.211423 kubelet[2652]: E1101 00:22:37.211021 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.211423 kubelet[2652]: W1101 00:22:37.211034 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.211959 kubelet[2652]: E1101 00:22:37.211048 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.211959 kubelet[2652]: E1101 00:22:37.211253 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.211959 kubelet[2652]: W1101 00:22:37.211268 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.211959 kubelet[2652]: E1101 00:22:37.211281 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.211959 kubelet[2652]: E1101 00:22:37.211486 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.211959 kubelet[2652]: W1101 00:22:37.211499 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.211959 kubelet[2652]: E1101 00:22:37.211512 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.211959 kubelet[2652]: E1101 00:22:37.211815 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.211959 kubelet[2652]: W1101 00:22:37.211828 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.211959 kubelet[2652]: E1101 00:22:37.211842 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.212365 kubelet[2652]: E1101 00:22:37.212285 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.212365 kubelet[2652]: W1101 00:22:37.212299 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.212365 kubelet[2652]: E1101 00:22:37.212314 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.212602 kubelet[2652]: E1101 00:22:37.212533 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.212602 kubelet[2652]: W1101 00:22:37.212554 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.212602 kubelet[2652]: E1101 00:22:37.212570 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.212887 kubelet[2652]: E1101 00:22:37.212856 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.212887 kubelet[2652]: W1101 00:22:37.212885 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.212982 kubelet[2652]: E1101 00:22:37.212900 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.213138 kubelet[2652]: E1101 00:22:37.213107 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.213138 kubelet[2652]: W1101 00:22:37.213132 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.213238 kubelet[2652]: E1101 00:22:37.213146 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.213406 kubelet[2652]: E1101 00:22:37.213375 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.213406 kubelet[2652]: W1101 00:22:37.213398 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.213509 kubelet[2652]: E1101 00:22:37.213413 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.214111 kubelet[2652]: E1101 00:22:37.213894 2652 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:37.214111 kubelet[2652]: W1101 00:22:37.213917 2652 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:37.214111 kubelet[2652]: E1101 00:22:37.213933 2652 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 1 00:22:37.883588 containerd[1505]: time="2025-11-01T00:22:37.883542700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:37.885357 containerd[1505]: time="2025-11-01T00:22:37.885091954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 1 00:22:37.886539 containerd[1505]: time="2025-11-01T00:22:37.886448358Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:37.890155 containerd[1505]: time="2025-11-01T00:22:37.890117166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:37.891640 containerd[1505]: time="2025-11-01T00:22:37.891085963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.750867836s"
Nov 1 00:22:37.891640 containerd[1505]: time="2025-11-01T00:22:37.891137408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 1 00:22:37.895534 containerd[1505]: time="2025-11-01T00:22:37.895472458Z" level=info msg="CreateContainer within sandbox \"000b8fdb0b40a47e64e5b19dfe2abf567ff09718786557801d0fbef09a975154\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 1 00:22:37.916278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2898064684.mount: Deactivated successfully.
Nov 1 00:22:37.920170 containerd[1505]: time="2025-11-01T00:22:37.920107950Z" level=info msg="CreateContainer within sandbox \"000b8fdb0b40a47e64e5b19dfe2abf567ff09718786557801d0fbef09a975154\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce\""
Nov 1 00:22:37.921090 containerd[1505]: time="2025-11-01T00:22:37.921052248Z" level=info msg="StartContainer for \"8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce\""
Nov 1 00:22:37.984493 systemd[1]: run-containerd-runc-k8s.io-8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce-runc.ApJkCC.mount: Deactivated successfully.
Nov 1 00:22:37.995883 systemd[1]: Started cri-containerd-8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce.scope - libcontainer container 8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce.
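The repeated kubelet errors above all come from FlexVolume plugin probing: the kubelet scans each `vendor~driver` directory under `/opt/libexec/kubernetes/kubelet-plugins/volume/exec`, runs the driver binary named after the `~` suffix with the `init` argument, and expects a JSON status reply. Here `nodeagent~uds/uds` is missing, so the call produces empty output and the JSON unmarshal fails. A minimal sketch of that precondition check, using a throwaway directory so the logic is visible in isolation (the temp directory is purely illustrative; on the node above one would point `plugin_root` at the real plugin path from the log):

```shell
# Sketch (assumption: illustrative temp dir stands in for
# /opt/libexec/kubernetes/kubelet-plugins/volume/exec).
# For each vendor~driver directory, the driver binary named after the
# suffix must exist and be executable for the kubelet probe to succeed.
plugin_root=$(mktemp -d)
mkdir -p "$plugin_root/nodeagent~uds"   # directory present, binary absent -- as in the log

for dir in "$plugin_root"/*~*/; do
  name=$(basename "$dir")
  driver="$dir${name#*~}"               # e.g. .../nodeagent~uds/uds
  if [ -x "$driver" ]; then
    echo "probe ok: $driver"
  else
    echo "driver missing: $driver"      # kubelet then logs 'executable file not found in $PATH'
  fi
done
```

Installing the driver binary the directory advertises, or removing the stale `nodeagent~uds` directory, would stop this probe failure from recurring.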
Nov 1 00:22:38.037313 containerd[1505]: time="2025-11-01T00:22:38.037150038Z" level=info msg="StartContainer for \"8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce\" returns successfully"
Nov 1 00:22:38.050209 systemd[1]: cri-containerd-8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce.scope: Deactivated successfully.
Nov 1 00:22:38.107073 containerd[1505]: time="2025-11-01T00:22:38.091944705Z" level=info msg="shim disconnected" id=8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce namespace=k8s.io
Nov 1 00:22:38.107073 containerd[1505]: time="2025-11-01T00:22:38.106869369Z" level=warning msg="cleaning up after shim disconnected" id=8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce namespace=k8s.io
Nov 1 00:22:38.107073 containerd[1505]: time="2025-11-01T00:22:38.106891965Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 00:22:38.180863 kubelet[2652]: I1101 00:22:38.179374 2652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 1 00:22:38.182413 containerd[1505]: time="2025-11-01T00:22:38.181927703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 1 00:22:38.215624 kubelet[2652]: I1101 00:22:38.215503 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fd89fc4fc-z44pw" podStartSLOduration=2.764733577 podStartE2EDuration="5.215479957s" podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" firstStartedPulling="2025-11-01 00:22:33.685852108 +0000 UTC m=+21.825364951" lastFinishedPulling="2025-11-01 00:22:36.136598489 +0000 UTC m=+24.276111331" observedRunningTime="2025-11-01 00:22:37.232269307 +0000 UTC m=+25.371782180" watchObservedRunningTime="2025-11-01 00:22:38.215479957 +0000 UTC m=+26.354992841"
Nov 1 00:22:38.909316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8682eb99e30544a5ce5a13b31f520f6de0b1d66b3b5bbcab5b50a8d3fee4ecce-rootfs.mount: Deactivated successfully.
Nov 1 00:22:39.026867 kubelet[2652]: E1101 00:22:39.026769 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882"
Nov 1 00:22:40.948095 containerd[1505]: time="2025-11-01T00:22:40.948006805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:40.949695 containerd[1505]: time="2025-11-01T00:22:40.949184686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 1 00:22:40.950846 containerd[1505]: time="2025-11-01T00:22:40.950800704Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:40.954342 containerd[1505]: time="2025-11-01T00:22:40.954287714Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:40.956024 containerd[1505]: time="2025-11-01T00:22:40.955822927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.773844862s"
Nov 1 00:22:40.956024 containerd[1505]: time="2025-11-01T00:22:40.955885133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 1 00:22:40.958970 containerd[1505]: time="2025-11-01T00:22:40.958932264Z" level=info msg="CreateContainer within sandbox \"000b8fdb0b40a47e64e5b19dfe2abf567ff09718786557801d0fbef09a975154\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 1 00:22:40.978983 containerd[1505]: time="2025-11-01T00:22:40.978908351Z" level=info msg="CreateContainer within sandbox \"000b8fdb0b40a47e64e5b19dfe2abf567ff09718786557801d0fbef09a975154\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df\""
Nov 1 00:22:40.980096 containerd[1505]: time="2025-11-01T00:22:40.980004588Z" level=info msg="StartContainer for \"a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df\""
Nov 1 00:22:41.015326 systemd[1]: run-containerd-runc-k8s.io-a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df-runc.pbnRvy.mount: Deactivated successfully.
Nov 1 00:22:41.021783 systemd[1]: Started cri-containerd-a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df.scope - libcontainer container a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df.
Nov 1 00:22:41.026563 kubelet[2652]: E1101 00:22:41.026533 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882"
Nov 1 00:22:41.056895 containerd[1505]: time="2025-11-01T00:22:41.056835135Z" level=info msg="StartContainer for \"a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df\" returns successfully"
Nov 1 00:22:41.600531 systemd[1]: cri-containerd-a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df.scope: Deactivated successfully.
Nov 1 00:22:41.636031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df-rootfs.mount: Deactivated successfully.
Nov 1 00:22:41.644453 containerd[1505]: time="2025-11-01T00:22:41.644397579Z" level=info msg="shim disconnected" id=a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df namespace=k8s.io
Nov 1 00:22:41.644587 containerd[1505]: time="2025-11-01T00:22:41.644548293Z" level=warning msg="cleaning up after shim disconnected" id=a25944cc4b36a88512d4528554b7b2c2d23cb491837be6404d713be9bfa2c0df namespace=k8s.io
Nov 1 00:22:41.644587 containerd[1505]: time="2025-11-01T00:22:41.644564957Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 00:22:41.660058 containerd[1505]: time="2025-11-01T00:22:41.659925215Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:22:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 1 00:22:41.670996 kubelet[2652]: I1101 00:22:41.669597 2652 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 1 00:22:41.724995 systemd[1]: Created slice kubepods-burstable-pod24c1a2ed_5b74_4228_b907_6de81bcc9c41.slice - libcontainer container kubepods-burstable-pod24c1a2ed_5b74_4228_b907_6de81bcc9c41.slice.
Nov 1 00:22:41.737545 kubelet[2652]: W1101 00:22:41.737234 2652 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-6-n-a2a464dc28" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-6-n-a2a464dc28' and this object
Nov 1 00:22:41.742802 systemd[1]: Created slice kubepods-besteffort-pod1f231fcb_ab47_4501_8198_d40b1b2412b1.slice - libcontainer container kubepods-besteffort-pod1f231fcb_ab47_4501_8198_d40b1b2412b1.slice.
Nov 1 00:22:41.743963 kubelet[2652]: E1101 00:22:41.743802 2652 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-6-n-a2a464dc28\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-a2a464dc28' and this object" logger="UnhandledError"
Nov 1 00:22:41.749807 kubelet[2652]: I1101 00:22:41.749543 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dn4x\" (UniqueName: \"kubernetes.io/projected/fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e-kube-api-access-2dn4x\") pod \"calico-apiserver-7b55fd6955-6t7nj\" (UID: \"fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e\") " pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj"
Nov 1 00:22:41.750305 kubelet[2652]: I1101 00:22:41.750174 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24c1a2ed-5b74-4228-b907-6de81bcc9c41-config-volume\") pod \"coredns-668d6bf9bc-gpnbt\" (UID: \"24c1a2ed-5b74-4228-b907-6de81bcc9c41\") " pod="kube-system/coredns-668d6bf9bc-gpnbt"
Nov 1 00:22:41.750305 kubelet[2652]: I1101 00:22:41.750199 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1f231fcb-ab47-4501-8198-d40b1b2412b1-whisker-backend-key-pair\") pod \"whisker-566ff8b8b7-gwg5w\" (UID: \"1f231fcb-ab47-4501-8198-d40b1b2412b1\") " pod="calico-system/whisker-566ff8b8b7-gwg5w"
Nov 1 00:22:41.750305 kubelet[2652]: I1101 00:22:41.750224 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5bfe0f66-8e86-4d9f-b0e9-32499fee7221-calico-apiserver-certs\") pod \"calico-apiserver-7b55fd6955-lwt2w\" (UID: \"5bfe0f66-8e86-4d9f-b0e9-32499fee7221\") " pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w"
Nov 1 00:22:41.750607 kubelet[2652]: I1101 00:22:41.750356 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9d4fd33c-57a2-484f-b033-ef3d888b08dc-goldmane-key-pair\") pod \"goldmane-666569f655-62wdq\" (UID: \"9d4fd33c-57a2-484f-b033-ef3d888b08dc\") " pod="calico-system/goldmane-666569f655-62wdq"
Nov 1 00:22:41.750607 kubelet[2652]: I1101 00:22:41.750384 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca02817f-7150-4fe5-a77c-3db57eb2bbb9-config-volume\") pod \"coredns-668d6bf9bc-6rqgg\" (UID: \"ca02817f-7150-4fe5-a77c-3db57eb2bbb9\") " pod="kube-system/coredns-668d6bf9bc-6rqgg"
Nov 1 00:22:41.752037 kubelet[2652]: I1101 00:22:41.750403 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d4fd33c-57a2-484f-b033-ef3d888b08dc-goldmane-ca-bundle\") pod \"goldmane-666569f655-62wdq\" (UID: \"9d4fd33c-57a2-484f-b033-ef3d888b08dc\") " pod="calico-system/goldmane-666569f655-62wdq"
Nov 1 00:22:41.752037 kubelet[2652]: I1101 00:22:41.750819 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2hlz\" (UniqueName: \"kubernetes.io/projected/98898523-1f05-472a-90a7-fe467ee6a22e-kube-api-access-z2hlz\") pod \"calico-kube-controllers-6d9dfb6c85-btn4p\" (UID: \"98898523-1f05-472a-90a7-fe467ee6a22e\") " pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p"
Nov 1 00:22:41.752037 kubelet[2652]: I1101 00:22:41.750846 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72959\" (UniqueName: \"kubernetes.io/projected/ca02817f-7150-4fe5-a77c-3db57eb2bbb9-kube-api-access-72959\") pod \"coredns-668d6bf9bc-6rqgg\" (UID: \"ca02817f-7150-4fe5-a77c-3db57eb2bbb9\") " pod="kube-system/coredns-668d6bf9bc-6rqgg"
Nov 1 00:22:41.752037 kubelet[2652]: I1101 00:22:41.750868 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e-calico-apiserver-certs\") pod \"calico-apiserver-7b55fd6955-6t7nj\" (UID: \"fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e\") " pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj"
Nov 1 00:22:41.752037 kubelet[2652]: I1101 00:22:41.750888 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4fd33c-57a2-484f-b033-ef3d888b08dc-config\") pod \"goldmane-666569f655-62wdq\" (UID: \"9d4fd33c-57a2-484f-b033-ef3d888b08dc\") " pod="calico-system/goldmane-666569f655-62wdq"
Nov 1 00:22:41.752215 kubelet[2652]: I1101 00:22:41.750908 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f231fcb-ab47-4501-8198-d40b1b2412b1-whisker-ca-bundle\") pod \"whisker-566ff8b8b7-gwg5w\" (UID: \"1f231fcb-ab47-4501-8198-d40b1b2412b1\") " pod="calico-system/whisker-566ff8b8b7-gwg5w"
Nov 1 00:22:41.752215 kubelet[2652]: I1101 00:22:41.750936 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45bx5\" (UniqueName: \"kubernetes.io/projected/5bfe0f66-8e86-4d9f-b0e9-32499fee7221-kube-api-access-45bx5\") pod \"calico-apiserver-7b55fd6955-lwt2w\" (UID: \"5bfe0f66-8e86-4d9f-b0e9-32499fee7221\") " pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w"
Nov 1 00:22:41.752215 kubelet[2652]: I1101 00:22:41.750957 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9xpk\" (UniqueName: \"kubernetes.io/projected/24c1a2ed-5b74-4228-b907-6de81bcc9c41-kube-api-access-g9xpk\") pod \"coredns-668d6bf9bc-gpnbt\" (UID: \"24c1a2ed-5b74-4228-b907-6de81bcc9c41\") " pod="kube-system/coredns-668d6bf9bc-gpnbt"
Nov 1 00:22:41.752215 kubelet[2652]: I1101 00:22:41.750981 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqq5h\" (UniqueName: \"kubernetes.io/projected/9d4fd33c-57a2-484f-b033-ef3d888b08dc-kube-api-access-jqq5h\") pod \"goldmane-666569f655-62wdq\" (UID: \"9d4fd33c-57a2-484f-b033-ef3d888b08dc\") " pod="calico-system/goldmane-666569f655-62wdq"
Nov 1 00:22:41.752215 kubelet[2652]: I1101 00:22:41.751001 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98898523-1f05-472a-90a7-fe467ee6a22e-tigera-ca-bundle\") pod \"calico-kube-controllers-6d9dfb6c85-btn4p\" (UID: \"98898523-1f05-472a-90a7-fe467ee6a22e\") " pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p"
Nov 1 00:22:41.752387 kubelet[2652]: I1101 00:22:41.751020 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqtnl\" (UniqueName: \"kubernetes.io/projected/1f231fcb-ab47-4501-8198-d40b1b2412b1-kube-api-access-hqtnl\") pod \"whisker-566ff8b8b7-gwg5w\" (UID: \"1f231fcb-ab47-4501-8198-d40b1b2412b1\") " pod="calico-system/whisker-566ff8b8b7-gwg5w"
Nov 1 00:22:41.754849 systemd[1]: Created slice kubepods-besteffort-podfb9d770f_45bf_4ea7_b239_8b2dc1a69c6e.slice - libcontainer container kubepods-besteffort-podfb9d770f_45bf_4ea7_b239_8b2dc1a69c6e.slice.
Nov 1 00:22:41.762972 systemd[1]: Created slice kubepods-besteffort-pod98898523_1f05_472a_90a7_fe467ee6a22e.slice - libcontainer container kubepods-besteffort-pod98898523_1f05_472a_90a7_fe467ee6a22e.slice.
Nov 1 00:22:41.770102 systemd[1]: Created slice kubepods-besteffort-pod9d4fd33c_57a2_484f_b033_ef3d888b08dc.slice - libcontainer container kubepods-besteffort-pod9d4fd33c_57a2_484f_b033_ef3d888b08dc.slice.
Nov 1 00:22:41.779013 systemd[1]: Created slice kubepods-burstable-podca02817f_7150_4fe5_a77c_3db57eb2bbb9.slice - libcontainer container kubepods-burstable-podca02817f_7150_4fe5_a77c_3db57eb2bbb9.slice.
Nov 1 00:22:41.783634 systemd[1]: Created slice kubepods-besteffort-pod5bfe0f66_8e86_4d9f_b0e9_32499fee7221.slice - libcontainer container kubepods-besteffort-pod5bfe0f66_8e86_4d9f_b0e9_32499fee7221.slice.
Nov 1 00:22:42.050083 containerd[1505]: time="2025-11-01T00:22:42.049780819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-566ff8b8b7-gwg5w,Uid:1f231fcb-ab47-4501-8198-d40b1b2412b1,Namespace:calico-system,Attempt:0,}"
Nov 1 00:22:42.062900 containerd[1505]: time="2025-11-01T00:22:42.060230757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b55fd6955-6t7nj,Uid:fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 00:22:42.068323 containerd[1505]: time="2025-11-01T00:22:42.068248209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9dfb6c85-btn4p,Uid:98898523-1f05-472a-90a7-fe467ee6a22e,Namespace:calico-system,Attempt:0,}"
Nov 1 00:22:42.077702 containerd[1505]: time="2025-11-01T00:22:42.077604892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-62wdq,Uid:9d4fd33c-57a2-484f-b033-ef3d888b08dc,Namespace:calico-system,Attempt:0,}"
Nov 1 00:22:42.088040 containerd[1505]: time="2025-11-01T00:22:42.087535182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b55fd6955-lwt2w,Uid:5bfe0f66-8e86-4d9f-b0e9-32499fee7221,Namespace:calico-apiserver,Attempt:0,}"
Nov 1 00:22:42.306990 containerd[1505]: time="2025-11-01T00:22:42.306657628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 1 00:22:42.424574 containerd[1505]: time="2025-11-01T00:22:42.424441577Z" level=error msg="Failed to destroy network for sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:22:42.430619 containerd[1505]: time="2025-11-01T00:22:42.430554446Z" level=error msg="Failed to destroy network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:22:42.431111 containerd[1505]: time="2025-11-01T00:22:42.430942750Z" level=error msg="encountered an error cleaning up failed sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 1 00:22:42.431111 containerd[1505]: time="2025-11-01T00:22:42.430999474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-62wdq,Uid:9d4fd33c-57a2-484f-b033-ef3d888b08dc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node
container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.433994 containerd[1505]: time="2025-11-01T00:22:42.433856337Z" level=error msg="encountered an error cleaning up failed sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.433994 containerd[1505]: time="2025-11-01T00:22:42.433926148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9dfb6c85-btn4p,Uid:98898523-1f05-472a-90a7-fe467ee6a22e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.438927 containerd[1505]: time="2025-11-01T00:22:42.438902345Z" level=error msg="Failed to destroy network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.439282 containerd[1505]: time="2025-11-01T00:22:42.439263934Z" level=error msg="encountered an error cleaning up failed sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.439552 containerd[1505]: time="2025-11-01T00:22:42.439533608Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-7b55fd6955-lwt2w,Uid:5bfe0f66-8e86-4d9f-b0e9-32499fee7221,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.440361 kubelet[2652]: E1101 00:22:42.439847 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.440361 kubelet[2652]: E1101 00:22:42.439912 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" Nov 1 00:22:42.440361 kubelet[2652]: E1101 00:22:42.439931 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" Nov 1 00:22:42.440706 kubelet[2652]: E1101 00:22:42.439976 2652 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b55fd6955-lwt2w_calico-apiserver(5bfe0f66-8e86-4d9f-b0e9-32499fee7221)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b55fd6955-lwt2w_calico-apiserver(5bfe0f66-8e86-4d9f-b0e9-32499fee7221)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:22:42.440706 kubelet[2652]: E1101 00:22:42.440020 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.440706 kubelet[2652]: E1101 00:22:42.440033 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-62wdq" Nov 1 00:22:42.441395 kubelet[2652]: E1101 00:22:42.440047 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-62wdq" Nov 1 00:22:42.441395 kubelet[2652]: E1101 00:22:42.440065 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-62wdq_calico-system(9d4fd33c-57a2-484f-b033-ef3d888b08dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-62wdq_calico-system(9d4fd33c-57a2-484f-b033-ef3d888b08dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:22:42.441395 kubelet[2652]: E1101 00:22:42.440090 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.441827 kubelet[2652]: E1101 00:22:42.440103 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" Nov 1 00:22:42.441827 kubelet[2652]: E1101 00:22:42.440112 2652 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" Nov 1 00:22:42.441827 kubelet[2652]: E1101 00:22:42.440130 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d9dfb6c85-btn4p_calico-system(98898523-1f05-472a-90a7-fe467ee6a22e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d9dfb6c85-btn4p_calico-system(98898523-1f05-472a-90a7-fe467ee6a22e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:22:42.444342 containerd[1505]: time="2025-11-01T00:22:42.444194429Z" level=error msg="Failed to destroy network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.445313 containerd[1505]: time="2025-11-01T00:22:42.445180077Z" level=error msg="encountered an error cleaning up failed sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.445313 containerd[1505]: time="2025-11-01T00:22:42.445265721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-566ff8b8b7-gwg5w,Uid:1f231fcb-ab47-4501-8198-d40b1b2412b1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.446055 kubelet[2652]: E1101 00:22:42.446026 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.446360 kubelet[2652]: E1101 00:22:42.446157 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-566ff8b8b7-gwg5w" Nov 1 00:22:42.446360 kubelet[2652]: E1101 00:22:42.446176 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-566ff8b8b7-gwg5w" Nov 1 00:22:42.446360 kubelet[2652]: E1101 00:22:42.446296 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-566ff8b8b7-gwg5w_calico-system(1f231fcb-ab47-4501-8198-d40b1b2412b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-566ff8b8b7-gwg5w_calico-system(1f231fcb-ab47-4501-8198-d40b1b2412b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-566ff8b8b7-gwg5w" podUID="1f231fcb-ab47-4501-8198-d40b1b2412b1" Nov 1 00:22:42.447146 containerd[1505]: time="2025-11-01T00:22:42.447112686Z" level=error msg="Failed to destroy network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.447365 containerd[1505]: time="2025-11-01T00:22:42.447333200Z" level=error msg="encountered an error cleaning up failed sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.447398 containerd[1505]: time="2025-11-01T00:22:42.447370887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b55fd6955-6t7nj,Uid:fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.447621 kubelet[2652]: E1101 00:22:42.447553 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:42.447621 kubelet[2652]: E1101 00:22:42.447608 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" Nov 1 00:22:42.447621 kubelet[2652]: E1101 00:22:42.447624 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" Nov 1 00:22:42.448267 kubelet[2652]: E1101 00:22:42.447653 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7b55fd6955-6t7nj_calico-apiserver(fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b55fd6955-6t7nj_calico-apiserver(fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:22:42.856984 kubelet[2652]: E1101 00:22:42.856887 2652 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:22:42.857388 kubelet[2652]: E1101 00:22:42.857062 2652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca02817f-7150-4fe5-a77c-3db57eb2bbb9-config-volume podName:ca02817f-7150-4fe5-a77c-3db57eb2bbb9 nodeName:}" failed. No retries permitted until 2025-11-01 00:22:43.357024817 +0000 UTC m=+31.496537701 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/ca02817f-7150-4fe5-a77c-3db57eb2bbb9-config-volume") pod "coredns-668d6bf9bc-6rqgg" (UID: "ca02817f-7150-4fe5-a77c-3db57eb2bbb9") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:22:42.860136 kubelet[2652]: E1101 00:22:42.860073 2652 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:22:42.860258 kubelet[2652]: E1101 00:22:42.860168 2652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/24c1a2ed-5b74-4228-b907-6de81bcc9c41-config-volume podName:24c1a2ed-5b74-4228-b907-6de81bcc9c41 nodeName:}" failed. No retries permitted until 2025-11-01 00:22:43.360140803 +0000 UTC m=+31.499653686 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/24c1a2ed-5b74-4228-b907-6de81bcc9c41-config-volume") pod "coredns-668d6bf9bc-gpnbt" (UID: "24c1a2ed-5b74-4228-b907-6de81bcc9c41") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:22:42.980384 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47-shm.mount: Deactivated successfully. Nov 1 00:22:42.980615 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7-shm.mount: Deactivated successfully. Nov 1 00:22:42.980792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc-shm.mount: Deactivated successfully. Nov 1 00:22:43.037973 systemd[1]: Created slice kubepods-besteffort-podae9e8348_8b23_4471_92e0_30ed8445c882.slice - libcontainer container kubepods-besteffort-podae9e8348_8b23_4471_92e0_30ed8445c882.slice. 
Nov 1 00:22:43.042989 containerd[1505]: time="2025-11-01T00:22:43.042901414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4lkfc,Uid:ae9e8348-8b23-4471-92e0-30ed8445c882,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:43.150724 containerd[1505]: time="2025-11-01T00:22:43.148897228Z" level=error msg="Failed to destroy network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.150724 containerd[1505]: time="2025-11-01T00:22:43.150465539Z" level=error msg="encountered an error cleaning up failed sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.150724 containerd[1505]: time="2025-11-01T00:22:43.150584769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4lkfc,Uid:ae9e8348-8b23-4471-92e0-30ed8445c882,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.151600 kubelet[2652]: E1101 00:22:43.150930 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.151600 kubelet[2652]: E1101 00:22:43.150997 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4lkfc" Nov 1 00:22:43.151600 kubelet[2652]: E1101 00:22:43.151025 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4lkfc" Nov 1 00:22:43.151888 kubelet[2652]: E1101 00:22:43.151072 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:22:43.153112 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665-shm.mount: Deactivated 
successfully. Nov 1 00:22:43.301064 kubelet[2652]: I1101 00:22:43.300404 2652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:22:43.305374 kubelet[2652]: I1101 00:22:43.305360 2652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:22:43.309498 kubelet[2652]: I1101 00:22:43.309486 2652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:22:43.312149 kubelet[2652]: I1101 00:22:43.312125 2652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:22:43.317050 kubelet[2652]: I1101 00:22:43.317005 2652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:22:43.321958 kubelet[2652]: I1101 00:22:43.321926 2652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:22:43.343072 containerd[1505]: time="2025-11-01T00:22:43.342647714Z" level=info msg="StopPodSandbox for \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\"" Nov 1 00:22:43.346565 containerd[1505]: time="2025-11-01T00:22:43.346516959Z" level=info msg="StopPodSandbox for \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\"" Nov 1 00:22:43.346939 containerd[1505]: time="2025-11-01T00:22:43.346878298Z" level=info msg="Ensure that sandbox 0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47 in task-service has been cleanup successfully" Nov 1 00:22:43.348683 containerd[1505]: time="2025-11-01T00:22:43.348494735Z" level=info msg="StopPodSandbox 
for \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\"" Nov 1 00:22:43.349227 containerd[1505]: time="2025-11-01T00:22:43.349190175Z" level=info msg="Ensure that sandbox 42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204 in task-service has been cleanup successfully" Nov 1 00:22:43.352845 containerd[1505]: time="2025-11-01T00:22:43.349611213Z" level=info msg="Ensure that sandbox 77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665 in task-service has been cleanup successfully" Nov 1 00:22:43.354296 containerd[1505]: time="2025-11-01T00:22:43.349617306Z" level=info msg="StopPodSandbox for \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\"" Nov 1 00:22:43.357614 containerd[1505]: time="2025-11-01T00:22:43.350039647Z" level=info msg="StopPodSandbox for \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\"" Nov 1 00:22:43.358368 containerd[1505]: time="2025-11-01T00:22:43.358331835Z" level=info msg="Ensure that sandbox 235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc in task-service has been cleanup successfully" Nov 1 00:22:43.359302 containerd[1505]: time="2025-11-01T00:22:43.358850701Z" level=info msg="Ensure that sandbox d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042 in task-service has been cleanup successfully" Nov 1 00:22:43.362160 containerd[1505]: time="2025-11-01T00:22:43.350241034Z" level=info msg="StopPodSandbox for \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\"" Nov 1 00:22:43.362428 containerd[1505]: time="2025-11-01T00:22:43.362384923Z" level=info msg="Ensure that sandbox 3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7 in task-service has been cleanup successfully" Nov 1 00:22:43.436862 containerd[1505]: time="2025-11-01T00:22:43.436611013Z" level=error msg="StopPodSandbox for \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\" failed" error="failed to destroy network for sandbox 
\"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.438268 kubelet[2652]: E1101 00:22:43.437887 2652 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:22:43.442001 containerd[1505]: time="2025-11-01T00:22:43.441860230Z" level=error msg="StopPodSandbox for \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\" failed" error="failed to destroy network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.442268 kubelet[2652]: E1101 00:22:43.442211 2652 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:22:43.443359 containerd[1505]: time="2025-11-01T00:22:43.443316203Z" level=error msg="StopPodSandbox for \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\" failed" error="failed to 
destroy network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.446799 containerd[1505]: time="2025-11-01T00:22:43.446748739Z" level=error msg="StopPodSandbox for \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\" failed" error="failed to destroy network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.450898 containerd[1505]: time="2025-11-01T00:22:43.450847709Z" level=error msg="StopPodSandbox for \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\" failed" error="failed to destroy network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.451033 containerd[1505]: time="2025-11-01T00:22:43.450996018Z" level=error msg="StopPodSandbox for \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\" failed" error="failed to destroy network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.456014 kubelet[2652]: E1101 00:22:43.438138 2652 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204"} Nov 1 00:22:43.456014 kubelet[2652]: 
E1101 00:22:43.455692 2652 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98898523-1f05-472a-90a7-fe467ee6a22e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:43.456014 kubelet[2652]: E1101 00:22:43.455732 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98898523-1f05-472a-90a7-fe467ee6a22e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:22:43.456014 kubelet[2652]: E1101 00:22:43.442257 2652 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665"} Nov 1 00:22:43.456791 kubelet[2652]: E1101 00:22:43.455784 2652 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ae9e8348-8b23-4471-92e0-30ed8445c882\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:43.456791 
kubelet[2652]: E1101 00:22:43.455805 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae9e8348-8b23-4471-92e0-30ed8445c882\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:22:43.456791 kubelet[2652]: E1101 00:22:43.455816 2652 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:22:43.456893 kubelet[2652]: E1101 00:22:43.455841 2652 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:22:43.456893 kubelet[2652]: E1101 00:22:43.455865 2652 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7"} Nov 1 00:22:43.456893 kubelet[2652]: E1101 00:22:43.455864 2652 kuberuntime_manager.go:1546] "Failed to 
stop sandbox" podSandboxID={"Type":"containerd","ID":"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042"} Nov 1 00:22:43.456893 kubelet[2652]: E1101 00:22:43.455884 2652 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:43.456893 kubelet[2652]: E1101 00:22:43.455897 2652 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d4fd33c-57a2-484f-b033-ef3d888b08dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:43.457019 kubelet[2652]: E1101 00:22:43.455901 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:22:43.457019 kubelet[2652]: E1101 00:22:43.455926 2652 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d4fd33c-57a2-484f-b033-ef3d888b08dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:22:43.457019 kubelet[2652]: E1101 00:22:43.455942 2652 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:22:43.457019 kubelet[2652]: E1101 00:22:43.455961 2652 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47"} Nov 1 00:22:43.457126 kubelet[2652]: E1101 00:22:43.455960 2652 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:22:43.457126 kubelet[2652]: E1101 00:22:43.455977 2652 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"5bfe0f66-8e86-4d9f-b0e9-32499fee7221\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:43.457126 kubelet[2652]: E1101 00:22:43.455982 2652 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc"} Nov 1 00:22:43.457126 kubelet[2652]: E1101 00:22:43.455992 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5bfe0f66-8e86-4d9f-b0e9-32499fee7221\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:22:43.457234 kubelet[2652]: E1101 00:22:43.456207 2652 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f231fcb-ab47-4501-8198-d40b1b2412b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:43.457234 kubelet[2652]: E1101 00:22:43.456225 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"1f231fcb-ab47-4501-8198-d40b1b2412b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-566ff8b8b7-gwg5w" podUID="1f231fcb-ab47-4501-8198-d40b1b2412b1" Nov 1 00:22:43.541326 containerd[1505]: time="2025-11-01T00:22:43.541244542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gpnbt,Uid:24c1a2ed-5b74-4228-b907-6de81bcc9c41,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:43.584696 containerd[1505]: time="2025-11-01T00:22:43.583695698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rqgg,Uid:ca02817f-7150-4fe5-a77c-3db57eb2bbb9,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:43.636406 containerd[1505]: time="2025-11-01T00:22:43.636339352Z" level=error msg="Failed to destroy network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.637145 containerd[1505]: time="2025-11-01T00:22:43.637092140Z" level=error msg="encountered an error cleaning up failed sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.637318 containerd[1505]: time="2025-11-01T00:22:43.637181940Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-gpnbt,Uid:24c1a2ed-5b74-4228-b907-6de81bcc9c41,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.639047 kubelet[2652]: E1101 00:22:43.637562 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.639047 kubelet[2652]: E1101 00:22:43.637692 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gpnbt" Nov 1 00:22:43.639047 kubelet[2652]: E1101 00:22:43.637724 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gpnbt" Nov 1 00:22:43.639215 kubelet[2652]: E1101 00:22:43.637776 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-gpnbt_kube-system(24c1a2ed-5b74-4228-b907-6de81bcc9c41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gpnbt_kube-system(24c1a2ed-5b74-4228-b907-6de81bcc9c41)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gpnbt" podUID="24c1a2ed-5b74-4228-b907-6de81bcc9c41" Nov 1 00:22:43.668309 containerd[1505]: time="2025-11-01T00:22:43.668242335Z" level=error msg="Failed to destroy network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.668885 containerd[1505]: time="2025-11-01T00:22:43.668823776Z" level=error msg="encountered an error cleaning up failed sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.668945 containerd[1505]: time="2025-11-01T00:22:43.668908227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rqgg,Uid:ca02817f-7150-4fe5-a77c-3db57eb2bbb9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 1 00:22:43.670709 kubelet[2652]: E1101 00:22:43.669135 2652 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:43.670709 kubelet[2652]: E1101 00:22:43.669192 2652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rqgg" Nov 1 00:22:43.670709 kubelet[2652]: E1101 00:22:43.669223 2652 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6rqgg" Nov 1 00:22:43.670828 kubelet[2652]: E1101 00:22:43.669266 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6rqgg_kube-system(ca02817f-7150-4fe5-a77c-3db57eb2bbb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6rqgg_kube-system(ca02817f-7150-4fe5-a77c-3db57eb2bbb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6rqgg" podUID="ca02817f-7150-4fe5-a77c-3db57eb2bbb9" Nov 1 00:22:43.976753 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca-shm.mount: Deactivated successfully. Nov 1 00:22:43.976846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1-shm.mount: Deactivated successfully. Nov 1 00:22:44.327509 kubelet[2652]: I1101 00:22:44.327440 2652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:22:44.330629 containerd[1505]: time="2025-11-01T00:22:44.330230770Z" level=info msg="StopPodSandbox for \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\"" Nov 1 00:22:44.332619 containerd[1505]: time="2025-11-01T00:22:44.332068159Z" level=info msg="Ensure that sandbox 36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca in task-service has been cleanup successfully" Nov 1 00:22:44.335073 kubelet[2652]: I1101 00:22:44.334384 2652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:22:44.335940 containerd[1505]: time="2025-11-01T00:22:44.335558183Z" level=info msg="StopPodSandbox for \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\"" Nov 1 00:22:44.338250 containerd[1505]: time="2025-11-01T00:22:44.335903509Z" level=info msg="Ensure that sandbox 91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1 in task-service has been cleanup successfully" Nov 1 00:22:44.411287 containerd[1505]: time="2025-11-01T00:22:44.411242070Z" level=error msg="StopPodSandbox for 
\"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\" failed" error="failed to destroy network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:44.411500 kubelet[2652]: E1101 00:22:44.411450 2652 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:22:44.411590 kubelet[2652]: E1101 00:22:44.411523 2652 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1"} Nov 1 00:22:44.411590 kubelet[2652]: E1101 00:22:44.411563 2652 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24c1a2ed-5b74-4228-b907-6de81bcc9c41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:44.411699 kubelet[2652]: E1101 00:22:44.411585 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24c1a2ed-5b74-4228-b907-6de81bcc9c41\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gpnbt" podUID="24c1a2ed-5b74-4228-b907-6de81bcc9c41" Nov 1 00:22:44.412466 containerd[1505]: time="2025-11-01T00:22:44.412443469Z" level=error msg="StopPodSandbox for \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\" failed" error="failed to destroy network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:22:44.412703 kubelet[2652]: E1101 00:22:44.412603 2652 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:22:44.412703 kubelet[2652]: E1101 00:22:44.412637 2652 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca"} Nov 1 00:22:44.412839 kubelet[2652]: E1101 00:22:44.412658 2652 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ca02817f-7150-4fe5-a77c-3db57eb2bbb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:22:44.412839 kubelet[2652]: E1101 00:22:44.412773 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ca02817f-7150-4fe5-a77c-3db57eb2bbb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6rqgg" podUID="ca02817f-7150-4fe5-a77c-3db57eb2bbb9" Nov 1 00:22:46.459807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1520277563.mount: Deactivated successfully. Nov 1 00:22:46.536716 containerd[1505]: time="2025-11-01T00:22:46.534710276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:22:46.549482 containerd[1505]: time="2025-11-01T00:22:46.549254810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.230899488s" Nov 1 00:22:46.549482 containerd[1505]: time="2025-11-01T00:22:46.549329160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:22:46.556009 containerd[1505]: time="2025-11-01T00:22:46.555925875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 1 00:22:46.603872 containerd[1505]: time="2025-11-01T00:22:46.603744697Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:46.604474 containerd[1505]: time="2025-11-01T00:22:46.604374874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:46.628870 containerd[1505]: time="2025-11-01T00:22:46.628780812Z" level=info msg="CreateContainer within sandbox \"000b8fdb0b40a47e64e5b19dfe2abf567ff09718786557801d0fbef09a975154\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:22:46.717457 containerd[1505]: time="2025-11-01T00:22:46.717318330Z" level=info msg="CreateContainer within sandbox \"000b8fdb0b40a47e64e5b19dfe2abf567ff09718786557801d0fbef09a975154\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2254ec1b227c42ca5fd15dbbf22099a793cefca43fa600a6978ba3b08d710e23\"" Nov 1 00:22:46.722894 containerd[1505]: time="2025-11-01T00:22:46.722804623Z" level=info msg="StartContainer for \"2254ec1b227c42ca5fd15dbbf22099a793cefca43fa600a6978ba3b08d710e23\"" Nov 1 00:22:46.867993 systemd[1]: Started cri-containerd-2254ec1b227c42ca5fd15dbbf22099a793cefca43fa600a6978ba3b08d710e23.scope - libcontainer container 2254ec1b227c42ca5fd15dbbf22099a793cefca43fa600a6978ba3b08d710e23. Nov 1 00:22:46.935065 containerd[1505]: time="2025-11-01T00:22:46.935005878Z" level=info msg="StartContainer for \"2254ec1b227c42ca5fd15dbbf22099a793cefca43fa600a6978ba3b08d710e23\" returns successfully" Nov 1 00:22:47.053330 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:22:47.054731 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 1 00:22:47.284514 containerd[1505]: time="2025-11-01T00:22:47.284452842Z" level=info msg="StopPodSandbox for \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\""
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.357 [INFO][3844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc"
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.358 [INFO][3844] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" iface="eth0" netns="/var/run/netns/cni-2d90953d-fb03-2549-7c54-961b6f6847bf"
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.359 [INFO][3844] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" iface="eth0" netns="/var/run/netns/cni-2d90953d-fb03-2549-7c54-961b6f6847bf"
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.359 [INFO][3844] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" iface="eth0" netns="/var/run/netns/cni-2d90953d-fb03-2549-7c54-961b6f6847bf"
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.359 [INFO][3844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc"
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.359 [INFO][3844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc"
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.544 [INFO][3851] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" HandleID="k8s-pod-network.235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0"
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.547 [INFO][3851] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.548 [INFO][3851] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.576 [WARNING][3851] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" HandleID="k8s-pod-network.235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0"
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.577 [INFO][3851] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" HandleID="k8s-pod-network.235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0"
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.581 [INFO][3851] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:22:47.587144 containerd[1505]: 2025-11-01 00:22:47.584 [INFO][3844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc"
Nov 1 00:22:47.592304 containerd[1505]: time="2025-11-01T00:22:47.588813871Z" level=info msg="TearDown network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\" successfully"
Nov 1 00:22:47.592304 containerd[1505]: time="2025-11-01T00:22:47.588846776Z" level=info msg="StopPodSandbox for \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\" returns successfully"
Nov 1 00:22:47.594024 systemd[1]: run-netns-cni\x2d2d90953d\x2dfb03\x2d2549\x2d7c54\x2d961b6f6847bf.mount: Deactivated successfully.
Nov 1 00:22:47.712500 kubelet[2652]: I1101 00:22:47.712445 2652 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1f231fcb-ab47-4501-8198-d40b1b2412b1-whisker-backend-key-pair\") pod \"1f231fcb-ab47-4501-8198-d40b1b2412b1\" (UID: \"1f231fcb-ab47-4501-8198-d40b1b2412b1\") "
Nov 1 00:22:47.734740 kubelet[2652]: I1101 00:22:47.732947 2652 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f231fcb-ab47-4501-8198-d40b1b2412b1-whisker-ca-bundle\") pod \"1f231fcb-ab47-4501-8198-d40b1b2412b1\" (UID: \"1f231fcb-ab47-4501-8198-d40b1b2412b1\") "
Nov 1 00:22:47.734740 kubelet[2652]: I1101 00:22:47.733040 2652 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqtnl\" (UniqueName: \"kubernetes.io/projected/1f231fcb-ab47-4501-8198-d40b1b2412b1-kube-api-access-hqtnl\") pod \"1f231fcb-ab47-4501-8198-d40b1b2412b1\" (UID: \"1f231fcb-ab47-4501-8198-d40b1b2412b1\") "
Nov 1 00:22:47.746634 systemd[1]: var-lib-kubelet-pods-1f231fcb\x2dab47\x2d4501\x2d8198\x2dd40b1b2412b1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Nov 1 00:22:47.755178 systemd[1]: var-lib-kubelet-pods-1f231fcb\x2dab47\x2d4501\x2d8198\x2dd40b1b2412b1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhqtnl.mount: Deactivated successfully.
Nov 1 00:22:47.768125 kubelet[2652]: I1101 00:22:47.766022 2652 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f231fcb-ab47-4501-8198-d40b1b2412b1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1f231fcb-ab47-4501-8198-d40b1b2412b1" (UID: "1f231fcb-ab47-4501-8198-d40b1b2412b1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 1 00:22:47.768560 kubelet[2652]: I1101 00:22:47.765987 2652 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f231fcb-ab47-4501-8198-d40b1b2412b1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1f231fcb-ab47-4501-8198-d40b1b2412b1" (UID: "1f231fcb-ab47-4501-8198-d40b1b2412b1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 1 00:22:47.768791 kubelet[2652]: I1101 00:22:47.768451 2652 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f231fcb-ab47-4501-8198-d40b1b2412b1-kube-api-access-hqtnl" (OuterVolumeSpecName: "kube-api-access-hqtnl") pod "1f231fcb-ab47-4501-8198-d40b1b2412b1" (UID: "1f231fcb-ab47-4501-8198-d40b1b2412b1"). InnerVolumeSpecName "kube-api-access-hqtnl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 1 00:22:47.844747 kubelet[2652]: I1101 00:22:47.844447 2652 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hqtnl\" (UniqueName: \"kubernetes.io/projected/1f231fcb-ab47-4501-8198-d40b1b2412b1-kube-api-access-hqtnl\") on node \"ci-4081-3-6-n-a2a464dc28\" DevicePath \"\""
Nov 1 00:22:47.844747 kubelet[2652]: I1101 00:22:47.844529 2652 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f231fcb-ab47-4501-8198-d40b1b2412b1-whisker-ca-bundle\") on node \"ci-4081-3-6-n-a2a464dc28\" DevicePath \"\""
Nov 1 00:22:47.844747 kubelet[2652]: I1101 00:22:47.844555 2652 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1f231fcb-ab47-4501-8198-d40b1b2412b1-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-a2a464dc28\" DevicePath \"\""
Nov 1 00:22:48.064530 systemd[1]: Removed slice kubepods-besteffort-pod1f231fcb_ab47_4501_8198_d40b1b2412b1.slice - libcontainer container kubepods-besteffort-pod1f231fcb_ab47_4501_8198_d40b1b2412b1.slice.
Nov 1 00:22:48.386460 kubelet[2652]: I1101 00:22:48.386413 2652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 1 00:22:48.427454 kubelet[2652]: I1101 00:22:48.422066 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lzsh4" podStartSLOduration=2.621943043 podStartE2EDuration="15.407527848s" podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" firstStartedPulling="2025-11-01 00:22:33.764775437 +0000 UTC m=+21.904288280" lastFinishedPulling="2025-11-01 00:22:46.550360212 +0000 UTC m=+34.689873085" observedRunningTime="2025-11-01 00:22:47.397714921 +0000 UTC m=+35.537227764" watchObservedRunningTime="2025-11-01 00:22:48.407527848 +0000 UTC m=+36.547040731"
Nov 1 00:22:48.595393 systemd[1]: Created slice kubepods-besteffort-pod6d4596f8_201b_4071_856f_d068e8d1a4cc.slice - libcontainer container kubepods-besteffort-pod6d4596f8_201b_4071_856f_d068e8d1a4cc.slice.
Nov 1 00:22:48.689083 kubelet[2652]: I1101 00:22:48.688895 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6d4596f8-201b-4071-856f-d068e8d1a4cc-whisker-backend-key-pair\") pod \"whisker-784c7f6667-sp4fm\" (UID: \"6d4596f8-201b-4071-856f-d068e8d1a4cc\") " pod="calico-system/whisker-784c7f6667-sp4fm"
Nov 1 00:22:48.689083 kubelet[2652]: I1101 00:22:48.688942 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7xlr\" (UniqueName: \"kubernetes.io/projected/6d4596f8-201b-4071-856f-d068e8d1a4cc-kube-api-access-w7xlr\") pod \"whisker-784c7f6667-sp4fm\" (UID: \"6d4596f8-201b-4071-856f-d068e8d1a4cc\") " pod="calico-system/whisker-784c7f6667-sp4fm"
Nov 1 00:22:48.689083 kubelet[2652]: I1101 00:22:48.688959 2652 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d4596f8-201b-4071-856f-d068e8d1a4cc-whisker-ca-bundle\") pod \"whisker-784c7f6667-sp4fm\" (UID: \"6d4596f8-201b-4071-856f-d068e8d1a4cc\") " pod="calico-system/whisker-784c7f6667-sp4fm"
Nov 1 00:22:48.908879 containerd[1505]: time="2025-11-01T00:22:48.908807443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784c7f6667-sp4fm,Uid:6d4596f8-201b-4071-856f-d068e8d1a4cc,Namespace:calico-system,Attempt:0,}"
Nov 1 00:22:49.128285 systemd-networkd[1391]: califf441ccfbd2: Link UP
Nov 1 00:22:49.128547 systemd-networkd[1391]: califf441ccfbd2: Gained carrier
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:48.992 [INFO][3970] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.007 [INFO][3970] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0 whisker-784c7f6667- calico-system 6d4596f8-201b-4071-856f-d068e8d1a4cc 905 0 2025-11-01 00:22:48 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:784c7f6667 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-a2a464dc28 whisker-784c7f6667-sp4fm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califf441ccfbd2 [] [] }} ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Namespace="calico-system" Pod="whisker-784c7f6667-sp4fm" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.007 [INFO][3970] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Namespace="calico-system" Pod="whisker-784c7f6667-sp4fm" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.043 [INFO][3978] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" HandleID="k8s-pod-network.498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.043 [INFO][3978] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" HandleID="k8s-pod-network.498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-a2a464dc28", "pod":"whisker-784c7f6667-sp4fm", "timestamp":"2025-11-01 00:22:49.043380749 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a2a464dc28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.043 [INFO][3978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.043 [INFO][3978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.043 [INFO][3978] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a2a464dc28'
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.054 [INFO][3978] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.065 [INFO][3978] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.071 [INFO][3978] ipam/ipam.go 511: Trying affinity for 192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.075 [INFO][3978] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.077 [INFO][3978] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.077 [INFO][3978] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.64/26 handle="k8s-pod-network.498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.079 [INFO][3978] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.087 [INFO][3978] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.64/26 handle="k8s-pod-network.498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.096 [INFO][3978] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.65/26] block=192.168.104.64/26 handle="k8s-pod-network.498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.096 [INFO][3978] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.65/26] handle="k8s-pod-network.498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.097 [INFO][3978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:22:49.152892 containerd[1505]: 2025-11-01 00:22:49.097 [INFO][3978] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.65/26] IPv6=[] ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" HandleID="k8s-pod-network.498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0"
Nov 1 00:22:49.155204 containerd[1505]: 2025-11-01 00:22:49.105 [INFO][3970] cni-plugin/k8s.go 418: Populated endpoint ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Namespace="calico-system" Pod="whisker-784c7f6667-sp4fm" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0", GenerateName:"whisker-784c7f6667-", Namespace:"calico-system", SelfLink:"", UID:"6d4596f8-201b-4071-856f-d068e8d1a4cc", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784c7f6667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"", Pod:"whisker-784c7f6667-sp4fm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califf441ccfbd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 00:22:49.155204 containerd[1505]: 2025-11-01 00:22:49.106 [INFO][3970] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.65/32] ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Namespace="calico-system" Pod="whisker-784c7f6667-sp4fm" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0"
Nov 1 00:22:49.155204 containerd[1505]: 2025-11-01 00:22:49.106 [INFO][3970] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf441ccfbd2 ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Namespace="calico-system" Pod="whisker-784c7f6667-sp4fm" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0"
Nov 1 00:22:49.155204 containerd[1505]: 2025-11-01 00:22:49.127 [INFO][3970] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Namespace="calico-system" Pod="whisker-784c7f6667-sp4fm" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0"
Nov 1 00:22:49.155204 containerd[1505]: 2025-11-01 00:22:49.127 [INFO][3970] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Namespace="calico-system" Pod="whisker-784c7f6667-sp4fm" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0", GenerateName:"whisker-784c7f6667-", Namespace:"calico-system", SelfLink:"", UID:"6d4596f8-201b-4071-856f-d068e8d1a4cc", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"784c7f6667", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55", Pod:"whisker-784c7f6667-sp4fm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.104.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califf441ccfbd2", MAC:"d6:4c:a2:72:69:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 00:22:49.155204 containerd[1505]: 2025-11-01 00:22:49.147 [INFO][3970] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55" Namespace="calico-system" Pod="whisker-784c7f6667-sp4fm" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--784c7f6667--sp4fm-eth0"
Nov 1 00:22:49.191070 containerd[1505]: time="2025-11-01T00:22:49.190758198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:22:49.191070 containerd[1505]: time="2025-11-01T00:22:49.190985824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:22:49.191070 containerd[1505]: time="2025-11-01T00:22:49.191043721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:22:49.193791 containerd[1505]: time="2025-11-01T00:22:49.193713101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:22:49.223799 systemd[1]: Started cri-containerd-498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55.scope - libcontainer container 498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55.
Nov 1 00:22:49.280953 containerd[1505]: time="2025-11-01T00:22:49.280909230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-784c7f6667-sp4fm,Uid:6d4596f8-201b-4071-856f-d068e8d1a4cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55\""
Nov 1 00:22:49.282736 containerd[1505]: time="2025-11-01T00:22:49.282526000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 1 00:22:49.726689 containerd[1505]: time="2025-11-01T00:22:49.726598218Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:22:49.743336 containerd[1505]: time="2025-11-01T00:22:49.728950213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 1 00:22:49.743561 containerd[1505]: time="2025-11-01T00:22:49.729169754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 1 00:22:49.744107 kubelet[2652]: E1101 00:22:49.743872 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 00:22:49.745141 kubelet[2652]: E1101 00:22:49.745055 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 00:22:49.761722 kubelet[2652]: E1101 00:22:49.761608 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d162083005e04424a0cd373354941ab6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784c7f6667-sp4fm_calico-system(6d4596f8-201b-4071-856f-d068e8d1a4cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:22:49.763933 containerd[1505]: time="2025-11-01T00:22:49.763866629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 1 00:22:49.804686 systemd[1]: run-containerd-runc-k8s.io-498a92910ecea1717fc15c1b6f9e3aa4c154ce71bc51f154c5cc5db9bb3f5a55-runc.5QMMpM.mount: Deactivated successfully.
Nov 1 00:22:50.030149 kubelet[2652]: I1101 00:22:50.030056 2652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f231fcb-ab47-4501-8198-d40b1b2412b1" path="/var/lib/kubelet/pods/1f231fcb-ab47-4501-8198-d40b1b2412b1/volumes"
Nov 1 00:22:50.200782 containerd[1505]: time="2025-11-01T00:22:50.200710756Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:22:50.202585 containerd[1505]: time="2025-11-01T00:22:50.202497636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 1 00:22:50.202692 containerd[1505]: time="2025-11-01T00:22:50.202597285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 1 00:22:50.202972 kubelet[2652]: E1101 00:22:50.202903 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 1 00:22:50.203085 kubelet[2652]: E1101 00:22:50.202977 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 1 00:22:50.203227 kubelet[2652]: E1101 00:22:50.203148 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784c7f6667-sp4fm_calico-system(6d4596f8-201b-4071-856f-d068e8d1a4cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:22:50.205204 kubelet[2652]: E1101 00:22:50.204656 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc"
Nov 1 00:22:50.401685 kubelet[2652]: E1101 00:22:50.401594 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc"
Nov 1 00:22:50.405909 systemd-networkd[1391]: califf441ccfbd2: Gained IPv6LL
Nov 1 00:22:54.435025 kubelet[2652]: I1101 00:22:54.434946 2652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 1 00:22:54.655257 systemd[1]: run-containerd-runc-k8s.io-2254ec1b227c42ca5fd15dbbf22099a793cefca43fa600a6978ba3b08d710e23-runc.JjY1MT.mount: Deactivated successfully.
Nov 1 00:22:55.029822 containerd[1505]: time="2025-11-01T00:22:55.029271414Z" level=info msg="StopPodSandbox for \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\""
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.105 [INFO][4187] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7"
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.106 [INFO][4187] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" iface="eth0" netns="/var/run/netns/cni-5f9bec96-c599-5f04-f0fe-8b12923fcbf9"
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.106 [INFO][4187] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" iface="eth0" netns="/var/run/netns/cni-5f9bec96-c599-5f04-f0fe-8b12923fcbf9"
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.107 [INFO][4187] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" iface="eth0" netns="/var/run/netns/cni-5f9bec96-c599-5f04-f0fe-8b12923fcbf9"
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.107 [INFO][4187] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7"
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.107 [INFO][4187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7"
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.145 [INFO][4194] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" HandleID="k8s-pod-network.3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0"
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.146 [INFO][4194] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.146 [INFO][4194] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.155 [WARNING][4194] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" HandleID="k8s-pod-network.3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0"
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.155 [INFO][4194] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" HandleID="k8s-pod-network.3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0"
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.158 [INFO][4194] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:22:55.164511 containerd[1505]: 2025-11-01 00:22:55.161 [INFO][4187] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7"
Nov 1 00:22:55.169134 containerd[1505]: time="2025-11-01T00:22:55.164786087Z" level=info msg="TearDown network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\" successfully"
Nov 1 00:22:55.169134 containerd[1505]: time="2025-11-01T00:22:55.164817539Z" level=info msg="StopPodSandbox for \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\" returns successfully"
Nov 1 00:22:55.169134 containerd[1505]: time="2025-11-01T00:22:55.167886349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b55fd6955-6t7nj,Uid:fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e,Namespace:calico-apiserver,Attempt:1,}"
Nov 1 00:22:55.171192 systemd[1]: run-netns-cni\x2d5f9bec96\x2dc599\x2d5f04\x2df0fe\x2d8b12923fcbf9.mount: Deactivated successfully.
Nov 1 00:22:55.374253 systemd-networkd[1391]: calid980c389c04: Link UP Nov 1 00:22:55.374936 systemd-networkd[1391]: calid980c389c04: Gained carrier Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.262 [INFO][4201] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.282 [INFO][4201] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0 calico-apiserver-7b55fd6955- calico-apiserver fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e 941 0 2025-11-01 00:22:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b55fd6955 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-a2a464dc28 calico-apiserver-7b55fd6955-6t7nj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid980c389c04 [] [] }} ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-6t7nj" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.282 [INFO][4201] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-6t7nj" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.320 [INFO][4212] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" 
HandleID="k8s-pod-network.6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.321 [INFO][4212] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" HandleID="k8s-pod-network.6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-a2a464dc28", "pod":"calico-apiserver-7b55fd6955-6t7nj", "timestamp":"2025-11-01 00:22:55.320894808 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a2a464dc28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.321 [INFO][4212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.321 [INFO][4212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.321 [INFO][4212] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a2a464dc28' Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.332 [INFO][4212] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.338 [INFO][4212] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.345 [INFO][4212] ipam/ipam.go 511: Trying affinity for 192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.347 [INFO][4212] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.350 [INFO][4212] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.350 [INFO][4212] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.64/26 handle="k8s-pod-network.6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.352 [INFO][4212] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.358 [INFO][4212] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.64/26 handle="k8s-pod-network.6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.367 [INFO][4212] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.104.66/26] block=192.168.104.64/26 handle="k8s-pod-network.6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.367 [INFO][4212] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.66/26] handle="k8s-pod-network.6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.367 [INFO][4212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:55.397957 containerd[1505]: 2025-11-01 00:22:55.368 [INFO][4212] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.66/26] IPv6=[] ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" HandleID="k8s-pod-network.6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:22:55.402131 containerd[1505]: 2025-11-01 00:22:55.371 [INFO][4201] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-6t7nj" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0", GenerateName:"calico-apiserver-7b55fd6955-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"7b55fd6955", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"", Pod:"calico-apiserver-7b55fd6955-6t7nj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid980c389c04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:55.402131 containerd[1505]: 2025-11-01 00:22:55.371 [INFO][4201] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.66/32] ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-6t7nj" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:22:55.402131 containerd[1505]: 2025-11-01 00:22:55.371 [INFO][4201] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid980c389c04 ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-6t7nj" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:22:55.402131 containerd[1505]: 2025-11-01 00:22:55.376 [INFO][4201] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-6t7nj" 
WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:22:55.402131 containerd[1505]: 2025-11-01 00:22:55.376 [INFO][4201] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-6t7nj" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0", GenerateName:"calico-apiserver-7b55fd6955-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b55fd6955", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce", Pod:"calico-apiserver-7b55fd6955-6t7nj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid980c389c04", MAC:"1a:d4:46:14:89:72", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:55.402131 containerd[1505]: 2025-11-01 00:22:55.392 [INFO][4201] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-6t7nj" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:22:55.430819 containerd[1505]: time="2025-11-01T00:22:55.430130558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:55.430819 containerd[1505]: time="2025-11-01T00:22:55.430215385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:55.430819 containerd[1505]: time="2025-11-01T00:22:55.430235685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:55.430819 containerd[1505]: time="2025-11-01T00:22:55.430491569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:55.451936 systemd[1]: Started cri-containerd-6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce.scope - libcontainer container 6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce. 
Nov 1 00:22:55.504960 containerd[1505]: time="2025-11-01T00:22:55.504917165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b55fd6955-6t7nj,Uid:fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce\"" Nov 1 00:22:55.509164 containerd[1505]: time="2025-11-01T00:22:55.509106715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:55.963085 containerd[1505]: time="2025-11-01T00:22:55.962937994Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:55.965420 containerd[1505]: time="2025-11-01T00:22:55.965274515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:55.965420 containerd[1505]: time="2025-11-01T00:22:55.965325235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:55.966109 kubelet[2652]: E1101 00:22:55.965590 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:55.966109 kubelet[2652]: E1101 00:22:55.965710 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:55.966109 kubelet[2652]: E1101 00:22:55.965906 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dn4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b55fd6955-6t7nj_calico-apiserver(fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:55.968067 kubelet[2652]: E1101 00:22:55.967958 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:22:56.031302 containerd[1505]: time="2025-11-01T00:22:56.030107497Z" level=info msg="StopPodSandbox for \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\"" Nov 1 00:22:56.031302 
containerd[1505]: time="2025-11-01T00:22:56.030333978Z" level=info msg="StopPodSandbox for \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\"" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.133 [INFO][4308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.134 [INFO][4308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" iface="eth0" netns="/var/run/netns/cni-c4519ec7-e136-804f-d0e2-dd2fa137658f" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.135 [INFO][4308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" iface="eth0" netns="/var/run/netns/cni-c4519ec7-e136-804f-d0e2-dd2fa137658f" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.135 [INFO][4308] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" iface="eth0" netns="/var/run/netns/cni-c4519ec7-e136-804f-d0e2-dd2fa137658f" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.135 [INFO][4308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.135 [INFO][4308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.160 [INFO][4326] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" HandleID="k8s-pod-network.d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.161 [INFO][4326] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.161 [INFO][4326] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.167 [WARNING][4326] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" HandleID="k8s-pod-network.d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.167 [INFO][4326] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" HandleID="k8s-pod-network.d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.169 [INFO][4326] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:56.172457 containerd[1505]: 2025-11-01 00:22:56.171 [INFO][4308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:22:56.174861 containerd[1505]: time="2025-11-01T00:22:56.172988621Z" level=info msg="TearDown network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\" successfully" Nov 1 00:22:56.174861 containerd[1505]: time="2025-11-01T00:22:56.173048915Z" level=info msg="StopPodSandbox for \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\" returns successfully" Nov 1 00:22:56.176974 containerd[1505]: time="2025-11-01T00:22:56.176957357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-62wdq,Uid:9d4fd33c-57a2-484f-b033-ef3d888b08dc,Namespace:calico-system,Attempt:1,}" Nov 1 00:22:56.179867 systemd[1]: run-netns-cni\x2dc4519ec7\x2de136\x2d804f\x2dd0e2\x2ddd2fa137658f.mount: Deactivated successfully. 
Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.128 [INFO][4309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.129 [INFO][4309] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" iface="eth0" netns="/var/run/netns/cni-10f63d0b-60ff-2a32-9708-602e7b69ab61" Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.129 [INFO][4309] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" iface="eth0" netns="/var/run/netns/cni-10f63d0b-60ff-2a32-9708-602e7b69ab61" Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.129 [INFO][4309] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" iface="eth0" netns="/var/run/netns/cni-10f63d0b-60ff-2a32-9708-602e7b69ab61" Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.129 [INFO][4309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.130 [INFO][4309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.162 [INFO][4321] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" HandleID="k8s-pod-network.42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.163 
[INFO][4321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.169 [INFO][4321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.181 [WARNING][4321] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" HandleID="k8s-pod-network.42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.182 [INFO][4321] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" HandleID="k8s-pod-network.42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.186 [INFO][4321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:56.192880 containerd[1505]: 2025-11-01 00:22:56.189 [INFO][4309] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:22:56.194848 containerd[1505]: time="2025-11-01T00:22:56.193214291Z" level=info msg="TearDown network for sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\" successfully" Nov 1 00:22:56.194848 containerd[1505]: time="2025-11-01T00:22:56.193257165Z" level=info msg="StopPodSandbox for \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\" returns successfully" Nov 1 00:22:56.196260 containerd[1505]: time="2025-11-01T00:22:56.195919054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9dfb6c85-btn4p,Uid:98898523-1f05-472a-90a7-fe467ee6a22e,Namespace:calico-system,Attempt:1,}" Nov 1 00:22:56.197170 systemd[1]: run-netns-cni\x2d10f63d0b\x2d60ff\x2d2a32\x2d9708\x2d602e7b69ab61.mount: Deactivated successfully. Nov 1 00:22:56.344643 systemd-networkd[1391]: cali5a6fb0debf1: Link UP Nov 1 00:22:56.344921 systemd-networkd[1391]: cali5a6fb0debf1: Gained carrier Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.247 [INFO][4335] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.260 [INFO][4335] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0 goldmane-666569f655- calico-system 9d4fd33c-57a2-484f-b033-ef3d888b08dc 952 0 2025-11-01 00:22:31 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-a2a464dc28 goldmane-666569f655-62wdq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5a6fb0debf1 [] [] }} ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Namespace="calico-system" 
Pod="goldmane-666569f655-62wdq" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.260 [INFO][4335] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Namespace="calico-system" Pod="goldmane-666569f655-62wdq" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.298 [INFO][4361] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" HandleID="k8s-pod-network.f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.298 [INFO][4361] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" HandleID="k8s-pod-network.f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-a2a464dc28", "pod":"goldmane-666569f655-62wdq", "timestamp":"2025-11-01 00:22:56.298733941 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a2a464dc28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.299 [INFO][4361] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.299 [INFO][4361] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.299 [INFO][4361] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a2a464dc28' Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.305 [INFO][4361] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.310 [INFO][4361] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.315 [INFO][4361] ipam/ipam.go 511: Trying affinity for 192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.317 [INFO][4361] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.319 [INFO][4361] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.319 [INFO][4361] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.64/26 handle="k8s-pod-network.f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.321 [INFO][4361] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395 Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.326 [INFO][4361] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.64/26 handle="k8s-pod-network.f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" 
host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.334 [INFO][4361] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.67/26] block=192.168.104.64/26 handle="k8s-pod-network.f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.335 [INFO][4361] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.67/26] handle="k8s-pod-network.f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.335 [INFO][4361] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:56.367548 containerd[1505]: 2025-11-01 00:22:56.335 [INFO][4361] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.67/26] IPv6=[] ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" HandleID="k8s-pod-network.f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.368413 containerd[1505]: 2025-11-01 00:22:56.341 [INFO][4335] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Namespace="calico-system" Pod="goldmane-666569f655-62wdq" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9d4fd33c-57a2-484f-b033-ef3d888b08dc", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"", Pod:"goldmane-666569f655-62wdq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a6fb0debf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:56.368413 containerd[1505]: 2025-11-01 00:22:56.341 [INFO][4335] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.67/32] ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Namespace="calico-system" Pod="goldmane-666569f655-62wdq" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.368413 containerd[1505]: 2025-11-01 00:22:56.341 [INFO][4335] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a6fb0debf1 ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Namespace="calico-system" Pod="goldmane-666569f655-62wdq" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.368413 containerd[1505]: 2025-11-01 00:22:56.344 [INFO][4335] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Namespace="calico-system" Pod="goldmane-666569f655-62wdq" 
WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.368413 containerd[1505]: 2025-11-01 00:22:56.345 [INFO][4335] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Namespace="calico-system" Pod="goldmane-666569f655-62wdq" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9d4fd33c-57a2-484f-b033-ef3d888b08dc", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395", Pod:"goldmane-666569f655-62wdq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a6fb0debf1", MAC:"2e:f7:ec:39:a6:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 
00:22:56.368413 containerd[1505]: 2025-11-01 00:22:56.364 [INFO][4335] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395" Namespace="calico-system" Pod="goldmane-666569f655-62wdq" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:22:56.389747 containerd[1505]: time="2025-11-01T00:22:56.389606858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:56.389747 containerd[1505]: time="2025-11-01T00:22:56.389684192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:56.389747 containerd[1505]: time="2025-11-01T00:22:56.389695271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.390078 containerd[1505]: time="2025-11-01T00:22:56.389767826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.404845 systemd[1]: Started cri-containerd-f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395.scope - libcontainer container f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395. 
Nov 1 00:22:56.420097 kubelet[2652]: E1101 00:22:56.420060 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:22:56.468462 systemd-networkd[1391]: cali2bd76764ea2: Link UP Nov 1 00:22:56.469204 systemd-networkd[1391]: cali2bd76764ea2: Gained carrier Nov 1 00:22:56.475520 containerd[1505]: time="2025-11-01T00:22:56.475132858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-62wdq,Uid:9d4fd33c-57a2-484f-b033-ef3d888b08dc,Namespace:calico-system,Attempt:1,} returns sandbox id \"f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395\"" Nov 1 00:22:56.478932 containerd[1505]: time="2025-11-01T00:22:56.478900076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.254 [INFO][4349] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.265 [INFO][4349] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0 calico-kube-controllers-6d9dfb6c85- calico-system 98898523-1f05-472a-90a7-fe467ee6a22e 951 0 2025-11-01 00:22:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d9dfb6c85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-a2a464dc28 calico-kube-controllers-6d9dfb6c85-btn4p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2bd76764ea2 [] [] }} ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Namespace="calico-system" Pod="calico-kube-controllers-6d9dfb6c85-btn4p" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.265 [INFO][4349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Namespace="calico-system" Pod="calico-kube-controllers-6d9dfb6c85-btn4p" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.300 [INFO][4366] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" HandleID="k8s-pod-network.81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.300 [INFO][4366] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" HandleID="k8s-pod-network.81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-a2a464dc28", "pod":"calico-kube-controllers-6d9dfb6c85-btn4p", "timestamp":"2025-11-01 00:22:56.300642478 +0000 UTC"}, 
Hostname:"ci-4081-3-6-n-a2a464dc28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.300 [INFO][4366] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.335 [INFO][4366] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.335 [INFO][4366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a2a464dc28' Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.406 [INFO][4366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.413 [INFO][4366] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.418 [INFO][4366] ipam/ipam.go 511: Trying affinity for 192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.421 [INFO][4366] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.426 [INFO][4366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.426 [INFO][4366] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.64/26 handle="k8s-pod-network.81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.428 [INFO][4366] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6 Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.437 [INFO][4366] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.64/26 handle="k8s-pod-network.81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.448 [INFO][4366] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.68/26] block=192.168.104.64/26 handle="k8s-pod-network.81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.448 [INFO][4366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.68/26] handle="k8s-pod-network.81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.448 [INFO][4366] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:22:56.488517 containerd[1505]: 2025-11-01 00:22:56.448 [INFO][4366] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.68/26] IPv6=[] ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" HandleID="k8s-pod-network.81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.489991 containerd[1505]: 2025-11-01 00:22:56.456 [INFO][4349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Namespace="calico-system" Pod="calico-kube-controllers-6d9dfb6c85-btn4p" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0", GenerateName:"calico-kube-controllers-6d9dfb6c85-", Namespace:"calico-system", SelfLink:"", UID:"98898523-1f05-472a-90a7-fe467ee6a22e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9dfb6c85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"", Pod:"calico-kube-controllers-6d9dfb6c85-btn4p", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bd76764ea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:56.489991 containerd[1505]: 2025-11-01 00:22:56.456 [INFO][4349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.68/32] ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Namespace="calico-system" Pod="calico-kube-controllers-6d9dfb6c85-btn4p" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.489991 containerd[1505]: 2025-11-01 00:22:56.456 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bd76764ea2 ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Namespace="calico-system" Pod="calico-kube-controllers-6d9dfb6c85-btn4p" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.489991 containerd[1505]: 2025-11-01 00:22:56.469 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Namespace="calico-system" Pod="calico-kube-controllers-6d9dfb6c85-btn4p" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.489991 containerd[1505]: 2025-11-01 00:22:56.469 [INFO][4349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Namespace="calico-system" Pod="calico-kube-controllers-6d9dfb6c85-btn4p" 
WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0", GenerateName:"calico-kube-controllers-6d9dfb6c85-", Namespace:"calico-system", SelfLink:"", UID:"98898523-1f05-472a-90a7-fe467ee6a22e", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9dfb6c85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6", Pod:"calico-kube-controllers-6d9dfb6c85-btn4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bd76764ea2", MAC:"9a:0b:d9:34:6f:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:56.489991 containerd[1505]: 2025-11-01 00:22:56.483 [INFO][4349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6" Namespace="calico-system" 
Pod="calico-kube-controllers-6d9dfb6c85-btn4p" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:22:56.507467 containerd[1505]: time="2025-11-01T00:22:56.507256273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:56.507467 containerd[1505]: time="2025-11-01T00:22:56.507304086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:56.507467 containerd[1505]: time="2025-11-01T00:22:56.507319501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.507467 containerd[1505]: time="2025-11-01T00:22:56.507388381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.520924 systemd[1]: Started cri-containerd-81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6.scope - libcontainer container 81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6. 
Nov 1 00:22:56.564363 containerd[1505]: time="2025-11-01T00:22:56.564259939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d9dfb6c85-btn4p,Uid:98898523-1f05-472a-90a7-fe467ee6a22e,Namespace:calico-system,Attempt:1,} returns sandbox id \"81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6\"" Nov 1 00:22:56.741858 systemd-networkd[1391]: calid980c389c04: Gained IPv6LL Nov 1 00:22:56.923488 containerd[1505]: time="2025-11-01T00:22:56.923411499Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:56.925818 containerd[1505]: time="2025-11-01T00:22:56.925645108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:22:56.925818 containerd[1505]: time="2025-11-01T00:22:56.925710462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:56.926108 kubelet[2652]: E1101 00:22:56.926004 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:56.926108 kubelet[2652]: E1101 00:22:56.926094 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:56.927006 
containerd[1505]: time="2025-11-01T00:22:56.926474070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:22:56.927156 kubelet[2652]: E1101 00:22:56.926436 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqq5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-62wdq_calico-system(9d4fd33c-57a2-484f-b033-ef3d888b08dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:56.928959 kubelet[2652]: E1101 00:22:56.928796 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 
00:22:57.028967 containerd[1505]: time="2025-11-01T00:22:57.028424857Z" level=info msg="StopPodSandbox for \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\""
Nov 1 00:22:57.029262 containerd[1505]: time="2025-11-01T00:22:57.029230995Z" level=info msg="StopPodSandbox for \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\""
Nov 1 00:22:57.034318 containerd[1505]: time="2025-11-01T00:22:57.034143786Z" level=info msg="StopPodSandbox for \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\""
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.136 [INFO][4514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca"
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.136 [INFO][4514] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" iface="eth0" netns="/var/run/netns/cni-a76b6847-1d54-3dbb-eacd-2d615d9e915f"
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.137 [INFO][4514] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" iface="eth0" netns="/var/run/netns/cni-a76b6847-1d54-3dbb-eacd-2d615d9e915f"
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.142 [INFO][4514] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" iface="eth0" netns="/var/run/netns/cni-a76b6847-1d54-3dbb-eacd-2d615d9e915f"
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.147 [INFO][4514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca"
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.147 [INFO][4514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca"
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.184 [INFO][4547] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" HandleID="k8s-pod-network.36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.184 [INFO][4547] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.184 [INFO][4547] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.192 [WARNING][4547] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" HandleID="k8s-pod-network.36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.192 [INFO][4547] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" HandleID="k8s-pod-network.36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.194 [INFO][4547] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:22:57.198002 containerd[1505]: 2025-11-01 00:22:57.196 [INFO][4514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca"
Nov 1 00:22:57.198584 containerd[1505]: time="2025-11-01T00:22:57.198558277Z" level=info msg="TearDown network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\" successfully"
Nov 1 00:22:57.198653 containerd[1505]: time="2025-11-01T00:22:57.198642312Z" level=info msg="StopPodSandbox for \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\" returns successfully"
Nov 1 00:22:57.201599 containerd[1505]: time="2025-11-01T00:22:57.201537474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rqgg,Uid:ca02817f-7150-4fe5-a77c-3db57eb2bbb9,Namespace:kube-system,Attempt:1,}"
Nov 1 00:22:57.203364 systemd[1]: run-netns-cni\x2da76b6847\x2d1d54\x2d3dbb\x2deacd\x2d2d615d9e915f.mount: Deactivated successfully.
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.143 [INFO][4525] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665"
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.144 [INFO][4525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" iface="eth0" netns="/var/run/netns/cni-a95a738a-f502-02e9-2167-aefb0dc84b61"
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.144 [INFO][4525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" iface="eth0" netns="/var/run/netns/cni-a95a738a-f502-02e9-2167-aefb0dc84b61"
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.144 [INFO][4525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" iface="eth0" netns="/var/run/netns/cni-a95a738a-f502-02e9-2167-aefb0dc84b61"
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.144 [INFO][4525] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665"
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.144 [INFO][4525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665"
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.207 [INFO][4542] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" HandleID="k8s-pod-network.77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0"
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.207 [INFO][4542] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.208 [INFO][4542] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.215 [WARNING][4542] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" HandleID="k8s-pod-network.77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0"
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.216 [INFO][4542] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" HandleID="k8s-pod-network.77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0"
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.218 [INFO][4542] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:22:57.226228 containerd[1505]: 2025-11-01 00:22:57.220 [INFO][4525] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665"
Nov 1 00:22:57.231354 containerd[1505]: time="2025-11-01T00:22:57.229870953Z" level=info msg="TearDown network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\" successfully"
Nov 1 00:22:57.231354 containerd[1505]: time="2025-11-01T00:22:57.229898790Z" level=info msg="StopPodSandbox for \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\" returns successfully"
Nov 1 00:22:57.231049 systemd[1]: run-netns-cni\x2da95a738a\x2df502\x2d02e9\x2d2167\x2daefb0dc84b61.mount: Deactivated successfully.
Nov 1 00:22:57.231538 containerd[1505]: time="2025-11-01T00:22:57.231385069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4lkfc,Uid:ae9e8348-8b23-4471-92e0-30ed8445c882,Namespace:calico-system,Attempt:1,}"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.173 [INFO][4526] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.173 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" iface="eth0" netns="/var/run/netns/cni-41165ae6-5a52-fef2-0b9d-ba4e14941816"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.174 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" iface="eth0" netns="/var/run/netns/cni-41165ae6-5a52-fef2-0b9d-ba4e14941816"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.174 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" iface="eth0" netns="/var/run/netns/cni-41165ae6-5a52-fef2-0b9d-ba4e14941816"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.174 [INFO][4526] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.174 [INFO][4526] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.226 [INFO][4552] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" HandleID="k8s-pod-network.91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.232 [INFO][4552] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.232 [INFO][4552] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.241 [WARNING][4552] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" HandleID="k8s-pod-network.91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.241 [INFO][4552] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" HandleID="k8s-pod-network.91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0"
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.243 [INFO][4552] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:22:57.250993 containerd[1505]: 2025-11-01 00:22:57.246 [INFO][4526] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1"
Nov 1 00:22:57.250993 containerd[1505]: time="2025-11-01T00:22:57.250888271Z" level=info msg="TearDown network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\" successfully"
Nov 1 00:22:57.250993 containerd[1505]: time="2025-11-01T00:22:57.250908486Z" level=info msg="StopPodSandbox for \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\" returns successfully"
Nov 1 00:22:57.252404 containerd[1505]: time="2025-11-01T00:22:57.252151512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gpnbt,Uid:24c1a2ed-5b74-4228-b907-6de81bcc9c41,Namespace:kube-system,Attempt:1,}"
Nov 1 00:22:57.255935 systemd[1]: run-netns-cni\x2d41165ae6\x2d5a52\x2dfef2\x2d0b9d\x2dba4e14941816.mount: Deactivated successfully.
Nov 1 00:22:57.372594 systemd-networkd[1391]: cali6eb9fa905f7: Link UP
Nov 1 00:22:57.374925 systemd-networkd[1391]: cali6eb9fa905f7: Gained carrier
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.258 [INFO][4563] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.268 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0 coredns-668d6bf9bc- kube-system ca02817f-7150-4fe5-a77c-3db57eb2bbb9 973 0 2025-11-01 00:22:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-a2a464dc28 coredns-668d6bf9bc-6rqgg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6eb9fa905f7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rqgg" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.268 [INFO][4563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rqgg" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.310 [INFO][4588] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" HandleID="k8s-pod-network.6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.310 [INFO][4588] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" HandleID="k8s-pod-network.6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-a2a464dc28", "pod":"coredns-668d6bf9bc-6rqgg", "timestamp":"2025-11-01 00:22:57.310238882 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a2a464dc28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.310 [INFO][4588] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.310 [INFO][4588] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.310 [INFO][4588] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a2a464dc28'
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.318 [INFO][4588] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.330 [INFO][4588] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.337 [INFO][4588] ipam/ipam.go 511: Trying affinity for 192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.339 [INFO][4588] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.342 [INFO][4588] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.343 [INFO][4588] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.64/26 handle="k8s-pod-network.6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.345 [INFO][4588] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.351 [INFO][4588] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.64/26 handle="k8s-pod-network.6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.362 [INFO][4588] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.69/26] block=192.168.104.64/26 handle="k8s-pod-network.6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.362 [INFO][4588] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.69/26] handle="k8s-pod-network.6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" host="ci-4081-3-6-n-a2a464dc28"
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.362 [INFO][4588] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 00:22:57.390283 containerd[1505]: 2025-11-01 00:22:57.362 [INFO][4588] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.69/26] IPv6=[] ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" HandleID="k8s-pod-network.6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.391453 containerd[1505]: 2025-11-01 00:22:57.365 [INFO][4563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rqgg" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ca02817f-7150-4fe5-a77c-3db57eb2bbb9", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"", Pod:"coredns-668d6bf9bc-6rqgg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6eb9fa905f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 00:22:57.391453 containerd[1505]: 2025-11-01 00:22:57.366 [INFO][4563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.69/32] ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rqgg" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.391453 containerd[1505]: 2025-11-01 00:22:57.366 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6eb9fa905f7 ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rqgg" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.391453 containerd[1505]: 2025-11-01 00:22:57.375 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rqgg" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.391453 containerd[1505]: 2025-11-01 00:22:57.376 [INFO][4563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rqgg" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ca02817f-7150-4fe5-a77c-3db57eb2bbb9", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 18, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952", Pod:"coredns-668d6bf9bc-6rqgg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6eb9fa905f7", MAC:"06:f8:ca:af:94:0b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 00:22:57.391453 containerd[1505]: 2025-11-01 00:22:57.388 [INFO][4563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952" Namespace="kube-system" Pod="coredns-668d6bf9bc-6rqgg" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0"
Nov 1 00:22:57.400531 containerd[1505]: time="2025-11-01T00:22:57.400409545Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:22:57.401420 containerd[1505]: time="2025-11-01T00:22:57.401389285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 1 00:22:57.401570 containerd[1505]: time="2025-11-01T00:22:57.401542130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 1 00:22:57.402950 kubelet[2652]: E1101 00:22:57.402893 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 00:22:57.403225 kubelet[2652]: E1101 00:22:57.402989 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 00:22:57.403586 kubelet[2652]: E1101 00:22:57.403379 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2hlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d9dfb6c85-btn4p_calico-system(98898523-1f05-472a-90a7-fe467ee6a22e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:22:57.404838 kubelet[2652]: E1101 00:22:57.404793 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e"
Nov 1 00:22:57.414609 containerd[1505]: time="2025-11-01T00:22:57.413260717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:22:57.414609 containerd[1505]: time="2025-11-01T00:22:57.413340386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:22:57.414609 containerd[1505]: time="2025-11-01T00:22:57.413349702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:22:57.414609 containerd[1505]: time="2025-11-01T00:22:57.413417309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:22:57.425056 kubelet[2652]: E1101 00:22:57.424735 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e"
Nov 1 00:22:57.427286 kubelet[2652]: E1101 00:22:57.426931 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e"
Nov 1 00:22:57.427902 kubelet[2652]: E1101 00:22:57.427319 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc"
Nov 1 00:22:57.438249 systemd[1]: Started cri-containerd-6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952.scope - libcontainer container 6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952.
Nov 1 00:22:57.493935 systemd-networkd[1391]: calid83019da844: Link UP
Nov 1 00:22:57.497816 systemd-networkd[1391]: calid83019da844: Gained carrier
Nov 1 00:22:57.511598 systemd-networkd[1391]: cali2bd76764ea2: Gained IPv6LL
Nov 1 00:22:57.513161 containerd[1505]: time="2025-11-01T00:22:57.511640461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6rqgg,Uid:ca02817f-7150-4fe5-a77c-3db57eb2bbb9,Namespace:kube-system,Attempt:1,} returns sandbox id \"6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952\""
Nov 1 00:22:57.512095 systemd-networkd[1391]: cali5a6fb0debf1: Gained IPv6LL
Nov 1 00:22:57.515702 containerd[1505]: time="2025-11-01T00:22:57.514686775Z" level=info msg="CreateContainer within sandbox \"6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.310 [INFO][4592] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.324 [INFO][4592] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0 coredns-668d6bf9bc- kube-system 24c1a2ed-5b74-4228-b907-6de81bcc9c41 975 0 2025-11-01 00:22:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-a2a464dc28 coredns-668d6bf9bc-gpnbt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid83019da844 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-gpnbt" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-"
Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.324 [INFO][4592] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-gpnbt" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0"
Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.354 [INFO][4618] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" HandleID="k8s-pod-network.2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0"
Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.355 [INFO][4618] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" HandleID="k8s-pod-network.2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-a2a464dc28", "pod":"coredns-668d6bf9bc-gpnbt", "timestamp":"2025-11-01 00:22:57.3548757 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a2a464dc28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.355 [INFO][4618] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.362 [INFO][4618] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.362 [INFO][4618] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a2a464dc28' Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.419 [INFO][4618] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.431 [INFO][4618] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.442 [INFO][4618] ipam/ipam.go 511: Trying affinity for 192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.448 [INFO][4618] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.454 [INFO][4618] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.454 [INFO][4618] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.64/26 handle="k8s-pod-network.2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.463 [INFO][4618] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.472 [INFO][4618] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.64/26 handle="k8s-pod-network.2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.482 [INFO][4618] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.104.70/26] block=192.168.104.64/26 handle="k8s-pod-network.2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.482 [INFO][4618] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.70/26] handle="k8s-pod-network.2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.482 [INFO][4618] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:57.551718 containerd[1505]: 2025-11-01 00:22:57.482 [INFO][4618] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.70/26] IPv6=[] ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" HandleID="k8s-pod-network.2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:22:57.553924 containerd[1505]: 2025-11-01 00:22:57.488 [INFO][4592] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-gpnbt" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"24c1a2ed-5b74-4228-b907-6de81bcc9c41", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"", Pod:"coredns-668d6bf9bc-gpnbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid83019da844", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:57.553924 containerd[1505]: 2025-11-01 00:22:57.488 [INFO][4592] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.70/32] ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-gpnbt" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:22:57.553924 containerd[1505]: 2025-11-01 00:22:57.488 [INFO][4592] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid83019da844 ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-gpnbt" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:22:57.553924 containerd[1505]: 2025-11-01 00:22:57.499 [INFO][4592] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-gpnbt" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:22:57.553924 containerd[1505]: 2025-11-01 00:22:57.503 [INFO][4592] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-gpnbt" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"24c1a2ed-5b74-4228-b907-6de81bcc9c41", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa", Pod:"coredns-668d6bf9bc-gpnbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid83019da844", 
MAC:"06:2f:e0:4c:45:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:57.553924 containerd[1505]: 2025-11-01 00:22:57.534 [INFO][4592] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa" Namespace="kube-system" Pod="coredns-668d6bf9bc-gpnbt" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:22:57.580843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3714930574.mount: Deactivated successfully. Nov 1 00:22:57.590206 containerd[1505]: time="2025-11-01T00:22:57.590072072Z" level=info msg="CreateContainer within sandbox \"6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e98ddc1195f659d7a490dc233619d2a38fd8a6c2beda5aff9a44eb8b38533866\"" Nov 1 00:22:57.591700 containerd[1505]: time="2025-11-01T00:22:57.591253872Z" level=info msg="StartContainer for \"e98ddc1195f659d7a490dc233619d2a38fd8a6c2beda5aff9a44eb8b38533866\"" Nov 1 00:22:57.602057 systemd-networkd[1391]: cali024e77ed1b6: Link UP Nov 1 00:22:57.603214 systemd-networkd[1391]: cali024e77ed1b6: Gained carrier Nov 1 00:22:57.620156 containerd[1505]: time="2025-11-01T00:22:57.619740065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:57.620156 containerd[1505]: time="2025-11-01T00:22:57.619866424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:57.620156 containerd[1505]: time="2025-11-01T00:22:57.619901775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:57.620156 containerd[1505]: time="2025-11-01T00:22:57.620038282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.295 [INFO][4577] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.316 [INFO][4577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0 csi-node-driver- calico-system ae9e8348-8b23-4471-92e0-30ed8445c882 974 0 2025-11-01 00:22:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-a2a464dc28 csi-node-driver-4lkfc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali024e77ed1b6 [] [] }} ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Namespace="calico-system" Pod="csi-node-driver-4lkfc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.316 [INFO][4577] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Namespace="calico-system" Pod="csi-node-driver-4lkfc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.369 [INFO][4613] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" HandleID="k8s-pod-network.768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.370 [INFO][4613] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" HandleID="k8s-pod-network.768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-a2a464dc28", "pod":"csi-node-driver-4lkfc", "timestamp":"2025-11-01 00:22:57.369938128 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a2a464dc28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.370 [INFO][4613] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.482 [INFO][4613] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.482 [INFO][4613] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a2a464dc28' Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.525 [INFO][4613] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.539 [INFO][4613] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.545 [INFO][4613] ipam/ipam.go 511: Trying affinity for 192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.549 [INFO][4613] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.554 [INFO][4613] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.554 [INFO][4613] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.64/26 handle="k8s-pod-network.768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.558 [INFO][4613] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.570 [INFO][4613] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.104.64/26 handle="k8s-pod-network.768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.587 [INFO][4613] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.104.71/26] block=192.168.104.64/26 handle="k8s-pod-network.768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.588 [INFO][4613] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.71/26] handle="k8s-pod-network.768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.588 [INFO][4613] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:57.634318 containerd[1505]: 2025-11-01 00:22:57.588 [INFO][4613] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.71/26] IPv6=[] ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" HandleID="k8s-pod-network.768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:22:57.635402 containerd[1505]: 2025-11-01 00:22:57.595 [INFO][4577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Namespace="calico-system" Pod="csi-node-driver-4lkfc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae9e8348-8b23-4471-92e0-30ed8445c882", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"", Pod:"csi-node-driver-4lkfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali024e77ed1b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:57.635402 containerd[1505]: 2025-11-01 00:22:57.595 [INFO][4577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.71/32] ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Namespace="calico-system" Pod="csi-node-driver-4lkfc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:22:57.635402 containerd[1505]: 2025-11-01 00:22:57.595 [INFO][4577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali024e77ed1b6 ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Namespace="calico-system" Pod="csi-node-driver-4lkfc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:22:57.635402 containerd[1505]: 2025-11-01 00:22:57.603 [INFO][4577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Namespace="calico-system" Pod="csi-node-driver-4lkfc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:22:57.635402 containerd[1505]: 2025-11-01 
00:22:57.605 [INFO][4577] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Namespace="calico-system" Pod="csi-node-driver-4lkfc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae9e8348-8b23-4471-92e0-30ed8445c882", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b", Pod:"csi-node-driver-4lkfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali024e77ed1b6", MAC:"9a:5c:ba:fe:0d:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:57.635402 containerd[1505]: 2025-11-01 00:22:57.625 
[INFO][4577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b" Namespace="calico-system" Pod="csi-node-driver-4lkfc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:22:57.636987 systemd[1]: Started cri-containerd-e98ddc1195f659d7a490dc233619d2a38fd8a6c2beda5aff9a44eb8b38533866.scope - libcontainer container e98ddc1195f659d7a490dc233619d2a38fd8a6c2beda5aff9a44eb8b38533866. Nov 1 00:22:57.660853 systemd[1]: Started cri-containerd-2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa.scope - libcontainer container 2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa. Nov 1 00:22:57.665784 containerd[1505]: time="2025-11-01T00:22:57.665676488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:57.666688 containerd[1505]: time="2025-11-01T00:22:57.666051148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:57.666688 containerd[1505]: time="2025-11-01T00:22:57.666159536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:57.667238 containerd[1505]: time="2025-11-01T00:22:57.666692701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:57.697237 systemd[1]: Started cri-containerd-768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b.scope - libcontainer container 768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b. 
Nov 1 00:22:57.704202 containerd[1505]: time="2025-11-01T00:22:57.704162277Z" level=info msg="StartContainer for \"e98ddc1195f659d7a490dc233619d2a38fd8a6c2beda5aff9a44eb8b38533866\" returns successfully" Nov 1 00:22:57.726634 containerd[1505]: time="2025-11-01T00:22:57.726378695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gpnbt,Uid:24c1a2ed-5b74-4228-b907-6de81bcc9c41,Namespace:kube-system,Attempt:1,} returns sandbox id \"2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa\"" Nov 1 00:22:57.734991 containerd[1505]: time="2025-11-01T00:22:57.734642138Z" level=info msg="CreateContainer within sandbox \"2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:22:57.751146 containerd[1505]: time="2025-11-01T00:22:57.751016181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4lkfc,Uid:ae9e8348-8b23-4471-92e0-30ed8445c882,Namespace:calico-system,Attempt:1,} returns sandbox id \"768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b\"" Nov 1 00:22:57.753440 containerd[1505]: time="2025-11-01T00:22:57.753376336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:22:57.757019 containerd[1505]: time="2025-11-01T00:22:57.756847497Z" level=info msg="CreateContainer within sandbox \"2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30b510c37f0b1574fc79f633919bb27a4803857792a21085c279683ddc6e1441\"" Nov 1 00:22:57.758540 containerd[1505]: time="2025-11-01T00:22:57.757735537Z" level=info msg="StartContainer for \"30b510c37f0b1574fc79f633919bb27a4803857792a21085c279683ddc6e1441\"" Nov 1 00:22:57.796963 systemd[1]: Started cri-containerd-30b510c37f0b1574fc79f633919bb27a4803857792a21085c279683ddc6e1441.scope - libcontainer container 30b510c37f0b1574fc79f633919bb27a4803857792a21085c279683ddc6e1441. 
Nov 1 00:22:57.835526 containerd[1505]: time="2025-11-01T00:22:57.835485355Z" level=info msg="StartContainer for \"30b510c37f0b1574fc79f633919bb27a4803857792a21085c279683ddc6e1441\" returns successfully" Nov 1 00:22:58.177995 containerd[1505]: time="2025-11-01T00:22:58.177900539Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:58.180160 containerd[1505]: time="2025-11-01T00:22:58.180068865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:22:58.180467 containerd[1505]: time="2025-11-01T00:22:58.180196006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:22:58.180555 kubelet[2652]: E1101 00:22:58.180399 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:58.180555 kubelet[2652]: E1101 00:22:58.180467 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:58.180763 kubelet[2652]: E1101 00:22:58.180628 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxqgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:58.183922 containerd[1505]: time="2025-11-01T00:22:58.183807677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:22:58.446912 kubelet[2652]: E1101 00:22:58.446222 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:22:58.449695 kubelet[2652]: E1101 00:22:58.449592 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:22:58.458273 kubelet[2652]: I1101 00:22:58.458114 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6rqgg" podStartSLOduration=40.458090804 podStartE2EDuration="40.458090804s" podCreationTimestamp="2025-11-01 00:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:58.457546757 +0000 UTC 
m=+46.597059640" watchObservedRunningTime="2025-11-01 00:22:58.458090804 +0000 UTC m=+46.597603677" Nov 1 00:22:58.470163 systemd-networkd[1391]: cali6eb9fa905f7: Gained IPv6LL Nov 1 00:22:58.615059 containerd[1505]: time="2025-11-01T00:22:58.615002527Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:58.616265 containerd[1505]: time="2025-11-01T00:22:58.616222532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:22:58.616353 containerd[1505]: time="2025-11-01T00:22:58.616323077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:22:58.616566 kubelet[2652]: E1101 00:22:58.616509 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:58.616705 kubelet[2652]: E1101 00:22:58.616567 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:58.617085 kubelet[2652]: E1101 00:22:58.616985 2652 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxqgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:58.618376 kubelet[2652]: E1101 00:22:58.618264 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:22:59.027780 containerd[1505]: time="2025-11-01T00:22:59.027491191Z" level=info msg="StopPodSandbox for \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\"" Nov 1 00:22:59.046944 systemd-networkd[1391]: calid83019da844: Gained IPv6LL Nov 1 00:22:59.117313 kubelet[2652]: I1101 00:22:59.116639 2652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gpnbt" podStartSLOduration=41.116619952 podStartE2EDuration="41.116619952s" podCreationTimestamp="2025-11-01 00:22:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:58.573927706 +0000 UTC m=+46.713440549" watchObservedRunningTime="2025-11-01 00:22:59.116619952 +0000 UTC m=+47.256132795" Nov 
1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.118 [INFO][4900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.118 [INFO][4900] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" iface="eth0" netns="/var/run/netns/cni-ecde9327-2f95-9886-a150-2537845e56f5" Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.118 [INFO][4900] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" iface="eth0" netns="/var/run/netns/cni-ecde9327-2f95-9886-a150-2537845e56f5" Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.119 [INFO][4900] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" iface="eth0" netns="/var/run/netns/cni-ecde9327-2f95-9886-a150-2537845e56f5" Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.119 [INFO][4900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.119 [INFO][4900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.137 [INFO][4907] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" HandleID="k8s-pod-network.0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.137 [INFO][4907] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.137 [INFO][4907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.143 [WARNING][4907] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" HandleID="k8s-pod-network.0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.143 [INFO][4907] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" HandleID="k8s-pod-network.0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.144 [INFO][4907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:59.149853 containerd[1505]: 2025-11-01 00:22:59.146 [INFO][4900] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:22:59.149853 containerd[1505]: time="2025-11-01T00:22:59.149766949Z" level=info msg="TearDown network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\" successfully" Nov 1 00:22:59.149853 containerd[1505]: time="2025-11-01T00:22:59.149802010Z" level=info msg="StopPodSandbox for \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\" returns successfully" Nov 1 00:22:59.152959 containerd[1505]: time="2025-11-01T00:22:59.150964623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b55fd6955-lwt2w,Uid:5bfe0f66-8e86-4d9f-b0e9-32499fee7221,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:22:59.151048 systemd[1]: run-netns-cni\x2decde9327\x2d2f95\x2d9886\x2da150\x2d2537845e56f5.mount: Deactivated successfully. Nov 1 00:22:59.294246 systemd-networkd[1391]: calie2868881258: Link UP Nov 1 00:22:59.299374 systemd-networkd[1391]: calie2868881258: Gained carrier Nov 1 00:22:59.302336 systemd-networkd[1391]: cali024e77ed1b6: Gained IPv6LL Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.197 [INFO][4914] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.208 [INFO][4914] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0 calico-apiserver-7b55fd6955- calico-apiserver 5bfe0f66-8e86-4d9f-b0e9-32499fee7221 1033 0 2025-11-01 00:22:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b55fd6955 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-a2a464dc28 calico-apiserver-7b55fd6955-lwt2w eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calie2868881258 [] [] }} ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-lwt2w" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.208 [INFO][4914] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-lwt2w" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.236 [INFO][4925] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" HandleID="k8s-pod-network.26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.236 [INFO][4925] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" HandleID="k8s-pod-network.26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-a2a464dc28", "pod":"calico-apiserver-7b55fd6955-lwt2w", "timestamp":"2025-11-01 00:22:59.236677754 +0000 UTC"}, Hostname:"ci-4081-3-6-n-a2a464dc28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:22:59.318441 
containerd[1505]: 2025-11-01 00:22:59.236 [INFO][4925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.236 [INFO][4925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.236 [INFO][4925] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-a2a464dc28' Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.243 [INFO][4925] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.249 [INFO][4925] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.255 [INFO][4925] ipam/ipam.go 511: Trying affinity for 192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.257 [INFO][4925] ipam/ipam.go 158: Attempting to load block cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.260 [INFO][4925] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.104.64/26 host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.261 [INFO][4925] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.104.64/26 handle="k8s-pod-network.26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.263 [INFO][4925] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756 Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.272 [INFO][4925] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.104.64/26 handle="k8s-pod-network.26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.287 [INFO][4925] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.104.72/26] block=192.168.104.64/26 handle="k8s-pod-network.26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.287 [INFO][4925] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.104.72/26] handle="k8s-pod-network.26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" host="ci-4081-3-6-n-a2a464dc28" Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.287 [INFO][4925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:22:59.318441 containerd[1505]: 2025-11-01 00:22:59.287 [INFO][4925] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.104.72/26] IPv6=[] ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" HandleID="k8s-pod-network.26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.322136 containerd[1505]: 2025-11-01 00:22:59.292 [INFO][4914] cni-plugin/k8s.go 418: Populated endpoint ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-lwt2w" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0", GenerateName:"calico-apiserver-7b55fd6955-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bfe0f66-8e86-4d9f-b0e9-32499fee7221", 
ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b55fd6955", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"", Pod:"calico-apiserver-7b55fd6955-lwt2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2868881258", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:59.322136 containerd[1505]: 2025-11-01 00:22:59.292 [INFO][4914] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.104.72/32] ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-lwt2w" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.322136 containerd[1505]: 2025-11-01 00:22:59.292 [INFO][4914] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2868881258 ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-lwt2w" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.322136 
containerd[1505]: 2025-11-01 00:22:59.294 [INFO][4914] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-lwt2w" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.322136 containerd[1505]: 2025-11-01 00:22:59.294 [INFO][4914] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-lwt2w" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0", GenerateName:"calico-apiserver-7b55fd6955-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bfe0f66-8e86-4d9f-b0e9-32499fee7221", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b55fd6955", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756", Pod:"calico-apiserver-7b55fd6955-lwt2w", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2868881258", MAC:"2e:e3:2d:16:ff:ff", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:22:59.322136 containerd[1505]: 2025-11-01 00:22:59.311 [INFO][4914] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756" Namespace="calico-apiserver" Pod="calico-apiserver-7b55fd6955-lwt2w" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:22:59.349021 containerd[1505]: time="2025-11-01T00:22:59.348739146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:59.349021 containerd[1505]: time="2025-11-01T00:22:59.348976872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:59.349307 containerd[1505]: time="2025-11-01T00:22:59.349171492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:59.349455 containerd[1505]: time="2025-11-01T00:22:59.349399882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:59.383285 systemd[1]: Started cri-containerd-26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756.scope - libcontainer container 26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756. 
Nov 1 00:22:59.440657 containerd[1505]: time="2025-11-01T00:22:59.440602179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b55fd6955-lwt2w,Uid:5bfe0f66-8e86-4d9f-b0e9-32499fee7221,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756\"" Nov 1 00:22:59.443190 containerd[1505]: time="2025-11-01T00:22:59.443007463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:59.451485 kubelet[2652]: E1101 00:22:59.451447 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:22:59.870925 containerd[1505]: time="2025-11-01T00:22:59.870623829Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:59.872768 containerd[1505]: time="2025-11-01T00:22:59.872568228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:59.872768 containerd[1505]: time="2025-11-01T00:22:59.872645754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:22:59.873067 kubelet[2652]: E1101 00:22:59.872981 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:59.873187 kubelet[2652]: E1101 00:22:59.873069 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:59.873357 kubelet[2652]: E1101 00:22:59.873276 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45bx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b55fd6955-lwt2w_calico-apiserver(5bfe0f66-8e86-4d9f-b0e9-32499fee7221): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:59.874694 kubelet[2652]: E1101 00:22:59.874599 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:23:00.454880 kubelet[2652]: E1101 00:23:00.454645 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:23:00.462350 kubelet[2652]: I1101 00:23:00.459261 2652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:00.517926 systemd-networkd[1391]: calie2868881258: Gained IPv6LL Nov 1 00:23:01.511728 kernel: bpftool[5044]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:23:01.793826 systemd-networkd[1391]: vxlan.calico: Link UP Nov 1 00:23:01.793835 systemd-networkd[1391]: vxlan.calico: Gained carrier Nov 1 00:23:03.014002 systemd-networkd[1391]: vxlan.calico: Gained IPv6LL Nov 1 00:23:06.034517 containerd[1505]: 
time="2025-11-01T00:23:06.034117442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:06.474732 containerd[1505]: time="2025-11-01T00:23:06.474532866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:06.475735 containerd[1505]: time="2025-11-01T00:23:06.475711675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:06.476010 containerd[1505]: time="2025-11-01T00:23:06.475795516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:06.476067 kubelet[2652]: E1101 00:23:06.476029 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:06.476334 kubelet[2652]: E1101 00:23:06.476073 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:06.476334 kubelet[2652]: E1101 00:23:06.476180 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d162083005e04424a0cd373354941ab6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784c7f6667-sp4fm_calico-system(6d4596f8-201b-4071-856f-d068e8d1a4cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:06.479143 containerd[1505]: time="2025-11-01T00:23:06.479128148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
00:23:06.913209 containerd[1505]: time="2025-11-01T00:23:06.913117074Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:06.915407 containerd[1505]: time="2025-11-01T00:23:06.915314086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:06.915514 containerd[1505]: time="2025-11-01T00:23:06.915445891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:06.915762 kubelet[2652]: E1101 00:23:06.915656 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:06.915874 kubelet[2652]: E1101 00:23:06.915770 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:06.916121 kubelet[2652]: E1101 00:23:06.915975 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784c7f6667-sp4fm_calico-system(6d4596f8-201b-4071-856f-d068e8d1a4cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:06.917934 kubelet[2652]: E1101 00:23:06.917845 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc" Nov 1 00:23:10.029740 containerd[1505]: time="2025-11-01T00:23:10.029494933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:10.491480 containerd[1505]: time="2025-11-01T00:23:10.491400304Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:10.493372 containerd[1505]: time="2025-11-01T00:23:10.493281055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:10.493496 containerd[1505]: time="2025-11-01T00:23:10.493376178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=85" Nov 1 00:23:10.493593 kubelet[2652]: E1101 00:23:10.493515 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:10.493593 kubelet[2652]: E1101 00:23:10.493579 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:10.495140 kubelet[2652]: E1101 00:23:10.493839 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2hlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d9dfb6c85-btn4p_calico-system(98898523-1f05-472a-90a7-fe467ee6a22e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:10.495538 kubelet[2652]: E1101 00:23:10.495459 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:23:12.061138 containerd[1505]: time="2025-11-01T00:23:12.060919374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:12.063859 containerd[1505]: 
time="2025-11-01T00:23:12.062272053Z" level=info msg="StopPodSandbox for \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\"" Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.137 [WARNING][5160] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0" Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.137 [INFO][5160] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.137 [INFO][5160] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" iface="eth0" netns="" Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.137 [INFO][5160] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.137 [INFO][5160] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.167 [INFO][5167] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" HandleID="k8s-pod-network.235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0" Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.168 [INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.168 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.176 [WARNING][5167] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" HandleID="k8s-pod-network.235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0" Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.177 [INFO][5167] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" HandleID="k8s-pod-network.235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0" Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.179 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.183905 containerd[1505]: 2025-11-01 00:23:12.182 [INFO][5160] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:23:12.185117 containerd[1505]: time="2025-11-01T00:23:12.183947915Z" level=info msg="TearDown network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\" successfully" Nov 1 00:23:12.185117 containerd[1505]: time="2025-11-01T00:23:12.183969895Z" level=info msg="StopPodSandbox for \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\" returns successfully" Nov 1 00:23:12.185964 containerd[1505]: time="2025-11-01T00:23:12.185910242Z" level=info msg="RemovePodSandbox for \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\"" Nov 1 00:23:12.185964 containerd[1505]: time="2025-11-01T00:23:12.185965343Z" level=info msg="Forcibly stopping sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\"" Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.216 [WARNING][5187] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" WorkloadEndpoint="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0" Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.216 [INFO][5187] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.216 [INFO][5187] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" iface="eth0" netns="" Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.216 [INFO][5187] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.216 [INFO][5187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.233 [INFO][5196] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" HandleID="k8s-pod-network.235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0" Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.233 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.233 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.241 [WARNING][5196] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" HandleID="k8s-pod-network.235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0" Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.241 [INFO][5196] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" HandleID="k8s-pod-network.235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Workload="ci--4081--3--6--n--a2a464dc28-k8s-whisker--566ff8b8b7--gwg5w-eth0" Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.243 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.248281 containerd[1505]: 2025-11-01 00:23:12.245 [INFO][5187] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc" Nov 1 00:23:12.248281 containerd[1505]: time="2025-11-01T00:23:12.246894830Z" level=info msg="TearDown network for sandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\" successfully" Nov 1 00:23:12.258471 containerd[1505]: time="2025-11-01T00:23:12.258421274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:12.258594 containerd[1505]: time="2025-11-01T00:23:12.258527668Z" level=info msg="RemovePodSandbox \"235608b4caad53d6f0bb29f019881a266b997a0912864aeb108c29a57825ccdc\" returns successfully" Nov 1 00:23:12.259222 containerd[1505]: time="2025-11-01T00:23:12.259180835Z" level=info msg="StopPodSandbox for \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\"" Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.297 [WARNING][5210] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0", GenerateName:"calico-apiserver-7b55fd6955-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b55fd6955", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce", Pod:"calico-apiserver-7b55fd6955-6t7nj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid980c389c04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.297 [INFO][5210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.297 [INFO][5210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" iface="eth0" netns="" Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.297 [INFO][5210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.297 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.318 [INFO][5218] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" HandleID="k8s-pod-network.3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.318 [INFO][5218] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.318 [INFO][5218] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.326 [WARNING][5218] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" HandleID="k8s-pod-network.3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.326 [INFO][5218] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" HandleID="k8s-pod-network.3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.328 [INFO][5218] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.332268 containerd[1505]: 2025-11-01 00:23:12.330 [INFO][5210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:23:12.333942 containerd[1505]: time="2025-11-01T00:23:12.332306174Z" level=info msg="TearDown network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\" successfully" Nov 1 00:23:12.333942 containerd[1505]: time="2025-11-01T00:23:12.332333955Z" level=info msg="StopPodSandbox for \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\" returns successfully" Nov 1 00:23:12.333942 containerd[1505]: time="2025-11-01T00:23:12.333337750Z" level=info msg="RemovePodSandbox for \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\"" Nov 1 00:23:12.333942 containerd[1505]: time="2025-11-01T00:23:12.333359900Z" level=info msg="Forcibly stopping sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\"" Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.375 [WARNING][5232] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0", GenerateName:"calico-apiserver-7b55fd6955-", Namespace:"calico-apiserver", SelfLink:"", UID:"fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b55fd6955", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"6298db4464152cfd9053e1ec08e324140083b3102444b286070aa97f89ca81ce", Pod:"calico-apiserver-7b55fd6955-6t7nj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid980c389c04", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.376 [INFO][5232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.376 [INFO][5232] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" iface="eth0" netns="" Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.376 [INFO][5232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.376 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.409 [INFO][5240] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" HandleID="k8s-pod-network.3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.410 [INFO][5240] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.410 [INFO][5240] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.417 [WARNING][5240] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" HandleID="k8s-pod-network.3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.417 [INFO][5240] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" HandleID="k8s-pod-network.3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--6t7nj-eth0" Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.419 [INFO][5240] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.422590 containerd[1505]: 2025-11-01 00:23:12.421 [INFO][5232] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7" Nov 1 00:23:12.423944 containerd[1505]: time="2025-11-01T00:23:12.422614331Z" level=info msg="TearDown network for sandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\" successfully" Nov 1 00:23:12.427375 containerd[1505]: time="2025-11-01T00:23:12.427332218Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:12.427421 containerd[1505]: time="2025-11-01T00:23:12.427404639Z" level=info msg="RemovePodSandbox \"3ad7605eca85c2ad7c5bdf86803f536b6e0a4ef28be314484ad4de151ffb07f7\" returns successfully" Nov 1 00:23:12.428003 containerd[1505]: time="2025-11-01T00:23:12.427972162Z" level=info msg="StopPodSandbox for \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\"" Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.463 [WARNING][5256] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0", GenerateName:"calico-apiserver-7b55fd6955-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bfe0f66-8e86-4d9f-b0e9-32499fee7221", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b55fd6955", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756", Pod:"calico-apiserver-7b55fd6955-lwt2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2868881258", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.463 [INFO][5256] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.463 [INFO][5256] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" iface="eth0" netns="" Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.463 [INFO][5256] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.463 [INFO][5256] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.481 [INFO][5263] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" HandleID="k8s-pod-network.0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.481 [INFO][5263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.482 [INFO][5263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.488 [WARNING][5263] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" HandleID="k8s-pod-network.0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.488 [INFO][5263] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" HandleID="k8s-pod-network.0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.489 [INFO][5263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.494970 containerd[1505]: 2025-11-01 00:23:12.491 [INFO][5256] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:23:12.495805 containerd[1505]: time="2025-11-01T00:23:12.494995477Z" level=info msg="TearDown network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\" successfully" Nov 1 00:23:12.495805 containerd[1505]: time="2025-11-01T00:23:12.495021665Z" level=info msg="StopPodSandbox for \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\" returns successfully" Nov 1 00:23:12.495805 containerd[1505]: time="2025-11-01T00:23:12.495491419Z" level=info msg="RemovePodSandbox for \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\"" Nov 1 00:23:12.495805 containerd[1505]: time="2025-11-01T00:23:12.495525291Z" level=info msg="Forcibly stopping sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\"" Nov 1 00:23:12.521375 containerd[1505]: time="2025-11-01T00:23:12.520102211Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:12.521567 containerd[1505]: 
time="2025-11-01T00:23:12.521468043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:12.522093 containerd[1505]: time="2025-11-01T00:23:12.521599142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:12.522171 kubelet[2652]: E1101 00:23:12.521772 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:12.522171 kubelet[2652]: E1101 00:23:12.521820 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:12.522171 kubelet[2652]: E1101 00:23:12.521940 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dn4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b55fd6955-6t7nj_calico-apiserver(fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:12.523217 kubelet[2652]: E1101 00:23:12.522999 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.541 [WARNING][5277] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0", GenerateName:"calico-apiserver-7b55fd6955-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bfe0f66-8e86-4d9f-b0e9-32499fee7221", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b55fd6955", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"26de82470e1185bb0bb8bc1209999e21e4cdf3eee4e4e1bc9594f255931a6756", Pod:"calico-apiserver-7b55fd6955-lwt2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.104.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2868881258", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.541 [INFO][5277] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.541 [INFO][5277] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" iface="eth0" netns="" Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.542 [INFO][5277] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.542 [INFO][5277] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.573 [INFO][5284] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" HandleID="k8s-pod-network.0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.573 [INFO][5284] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.574 [INFO][5284] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.580 [WARNING][5284] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" HandleID="k8s-pod-network.0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.580 [INFO][5284] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" HandleID="k8s-pod-network.0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--apiserver--7b55fd6955--lwt2w-eth0" Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.582 [INFO][5284] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.587883 containerd[1505]: 2025-11-01 00:23:12.584 [INFO][5277] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47" Nov 1 00:23:12.587883 containerd[1505]: time="2025-11-01T00:23:12.586565327Z" level=info msg="TearDown network for sandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\" successfully" Nov 1 00:23:12.592568 containerd[1505]: time="2025-11-01T00:23:12.592528959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:12.592780 containerd[1505]: time="2025-11-01T00:23:12.592739222Z" level=info msg="RemovePodSandbox \"0f5bd49cd9991e4089f9a71e8868630f869f8e7ad40889a0029d73a08ac43b47\" returns successfully" Nov 1 00:23:12.593310 containerd[1505]: time="2025-11-01T00:23:12.593278803Z" level=info msg="StopPodSandbox for \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\"" Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.632 [WARNING][5298] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae9e8348-8b23-4471-92e0-30ed8445c882", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b", Pod:"csi-node-driver-4lkfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali024e77ed1b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.632 [INFO][5298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.632 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" iface="eth0" netns="" Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.632 [INFO][5298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.632 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.656 [INFO][5306] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" HandleID="k8s-pod-network.77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.656 [INFO][5306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.656 [INFO][5306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.662 [WARNING][5306] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" HandleID="k8s-pod-network.77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.662 [INFO][5306] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" HandleID="k8s-pod-network.77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.664 [INFO][5306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.669039 containerd[1505]: 2025-11-01 00:23:12.666 [INFO][5298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:23:12.669039 containerd[1505]: time="2025-11-01T00:23:12.668916890Z" level=info msg="TearDown network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\" successfully" Nov 1 00:23:12.669039 containerd[1505]: time="2025-11-01T00:23:12.668943167Z" level=info msg="StopPodSandbox for \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\" returns successfully" Nov 1 00:23:12.669538 containerd[1505]: time="2025-11-01T00:23:12.669480985Z" level=info msg="RemovePodSandbox for \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\"" Nov 1 00:23:12.669572 containerd[1505]: time="2025-11-01T00:23:12.669542677Z" level=info msg="Forcibly stopping sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\"" Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.702 [WARNING][5320] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ae9e8348-8b23-4471-92e0-30ed8445c882", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"768b6b444aa212bfaa5a45ddf31e2eabc6f0e43bb2f6e2132981d83d55b9bc0b", Pod:"csi-node-driver-4lkfc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.104.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali024e77ed1b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.702 [INFO][5320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.702 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" iface="eth0" netns="" Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.702 [INFO][5320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.702 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.722 [INFO][5328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" HandleID="k8s-pod-network.77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.722 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.722 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.728 [WARNING][5328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" HandleID="k8s-pod-network.77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.728 [INFO][5328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" HandleID="k8s-pod-network.77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Workload="ci--4081--3--6--n--a2a464dc28-k8s-csi--node--driver--4lkfc-eth0" Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.729 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.734760 containerd[1505]: 2025-11-01 00:23:12.731 [INFO][5320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665" Nov 1 00:23:12.734760 containerd[1505]: time="2025-11-01T00:23:12.734150352Z" level=info msg="TearDown network for sandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\" successfully" Nov 1 00:23:12.738899 containerd[1505]: time="2025-11-01T00:23:12.738828716Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:12.738899 containerd[1505]: time="2025-11-01T00:23:12.738897120Z" level=info msg="RemovePodSandbox \"77c594c981957996d7d8dd986bc0ea8090506eaaab0cb5c236606a0f3cdc3665\" returns successfully" Nov 1 00:23:12.739624 containerd[1505]: time="2025-11-01T00:23:12.739573180Z" level=info msg="StopPodSandbox for \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\"" Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.778 [WARNING][5342] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0", GenerateName:"calico-kube-controllers-6d9dfb6c85-", Namespace:"calico-system", SelfLink:"", UID:"98898523-1f05-472a-90a7-fe467ee6a22e", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9dfb6c85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6", Pod:"calico-kube-controllers-6d9dfb6c85-btn4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.68/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bd76764ea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.778 [INFO][5342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.778 [INFO][5342] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" iface="eth0" netns="" Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.778 [INFO][5342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.778 [INFO][5342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.799 [INFO][5350] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" HandleID="k8s-pod-network.42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.799 [INFO][5350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.799 [INFO][5350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.806 [WARNING][5350] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" HandleID="k8s-pod-network.42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.806 [INFO][5350] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" HandleID="k8s-pod-network.42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.808 [INFO][5350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.810873 containerd[1505]: 2025-11-01 00:23:12.809 [INFO][5342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:23:12.811715 containerd[1505]: time="2025-11-01T00:23:12.810902875Z" level=info msg="TearDown network for sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\" successfully" Nov 1 00:23:12.811715 containerd[1505]: time="2025-11-01T00:23:12.810926358Z" level=info msg="StopPodSandbox for \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\" returns successfully" Nov 1 00:23:12.811715 containerd[1505]: time="2025-11-01T00:23:12.811319973Z" level=info msg="RemovePodSandbox for \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\"" Nov 1 00:23:12.811715 containerd[1505]: time="2025-11-01T00:23:12.811342594Z" level=info msg="Forcibly stopping sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\"" Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.837 [WARNING][5364] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0", GenerateName:"calico-kube-controllers-6d9dfb6c85-", Namespace:"calico-system", SelfLink:"", UID:"98898523-1f05-472a-90a7-fe467ee6a22e", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d9dfb6c85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"81f6cf20139e4ccf7750cc41f8ca50101673c5fffb381fcb4bed9360835108c6", Pod:"calico-kube-controllers-6d9dfb6c85-btn4p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.104.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2bd76764ea2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.837 [INFO][5364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.837 [INFO][5364] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" iface="eth0" netns="" Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.837 [INFO][5364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.837 [INFO][5364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.853 [INFO][5372] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" HandleID="k8s-pod-network.42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.853 [INFO][5372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.853 [INFO][5372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.860 [WARNING][5372] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" HandleID="k8s-pod-network.42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.860 [INFO][5372] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" HandleID="k8s-pod-network.42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Workload="ci--4081--3--6--n--a2a464dc28-k8s-calico--kube--controllers--6d9dfb6c85--btn4p-eth0" Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.862 [INFO][5372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.865370 containerd[1505]: 2025-11-01 00:23:12.863 [INFO][5364] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204" Nov 1 00:23:12.865370 containerd[1505]: time="2025-11-01T00:23:12.865333628Z" level=info msg="TearDown network for sandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\" successfully" Nov 1 00:23:12.870096 containerd[1505]: time="2025-11-01T00:23:12.870047286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:12.870096 containerd[1505]: time="2025-11-01T00:23:12.870094573Z" level=info msg="RemovePodSandbox \"42cc1398bd7d94878589d77d0eecd8399357ae09ff84e68918e7acef6ff9a204\" returns successfully" Nov 1 00:23:12.870521 containerd[1505]: time="2025-11-01T00:23:12.870470255Z" level=info msg="StopPodSandbox for \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\"" Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.901 [WARNING][5386] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"24c1a2ed-5b74-4228-b907-6de81bcc9c41", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa", Pod:"coredns-668d6bf9bc-gpnbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid83019da844", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.901 [INFO][5386] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.901 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" iface="eth0" netns="" Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.901 [INFO][5386] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.901 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.918 [INFO][5393] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" HandleID="k8s-pod-network.91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.918 [INFO][5393] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.918 [INFO][5393] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.923 [WARNING][5393] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" HandleID="k8s-pod-network.91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.924 [INFO][5393] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" HandleID="k8s-pod-network.91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.925 [INFO][5393] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.928210 containerd[1505]: 2025-11-01 00:23:12.926 [INFO][5386] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:23:12.928896 containerd[1505]: time="2025-11-01T00:23:12.928265726Z" level=info msg="TearDown network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\" successfully" Nov 1 00:23:12.928896 containerd[1505]: time="2025-11-01T00:23:12.928289258Z" level=info msg="StopPodSandbox for \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\" returns successfully" Nov 1 00:23:12.928896 containerd[1505]: time="2025-11-01T00:23:12.928860537Z" level=info msg="RemovePodSandbox for \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\"" Nov 1 00:23:12.928896 containerd[1505]: time="2025-11-01T00:23:12.928883458Z" level=info msg="Forcibly stopping sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\"" Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.965 [WARNING][5407] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"24c1a2ed-5b74-4228-b907-6de81bcc9c41", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"2fc90e6c219cfea84579add39e5cc548eb9928eae511bf507fd30213bf6875fa", Pod:"coredns-668d6bf9bc-gpnbt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid83019da844", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.998215 containerd[1505]: 
2025-11-01 00:23:12.966 [INFO][5407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.966 [INFO][5407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" iface="eth0" netns="" Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.966 [INFO][5407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.966 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.983 [INFO][5415] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" HandleID="k8s-pod-network.91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.983 [INFO][5415] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.983 [INFO][5415] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.992 [WARNING][5415] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" HandleID="k8s-pod-network.91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.992 [INFO][5415] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" HandleID="k8s-pod-network.91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--gpnbt-eth0" Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.994 [INFO][5415] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.998215 containerd[1505]: 2025-11-01 00:23:12.995 [INFO][5407] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1" Nov 1 00:23:12.998730 containerd[1505]: time="2025-11-01T00:23:12.998272465Z" level=info msg="TearDown network for sandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\" successfully" Nov 1 00:23:13.003784 containerd[1505]: time="2025-11-01T00:23:13.003740304Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:13.003859 containerd[1505]: time="2025-11-01T00:23:13.003831922Z" level=info msg="RemovePodSandbox \"91888ef184450fee6c42e7f7123ce4657ad203c5a670289554a77d4bccc5e3e1\" returns successfully" Nov 1 00:23:13.004426 containerd[1505]: time="2025-11-01T00:23:13.004395509Z" level=info msg="StopPodSandbox for \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\"" Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.046 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9d4fd33c-57a2-484f-b033-ef3d888b08dc", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395", Pod:"goldmane-666569f655-62wdq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali5a6fb0debf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.046 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.046 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" iface="eth0" netns="" Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.046 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.046 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.070 [INFO][5436] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" HandleID="k8s-pod-network.d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.070 [INFO][5436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.070 [INFO][5436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.077 [WARNING][5436] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" HandleID="k8s-pod-network.d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.077 [INFO][5436] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" HandleID="k8s-pod-network.d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.078 [INFO][5436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:13.081903 containerd[1505]: 2025-11-01 00:23:13.080 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:23:13.083376 containerd[1505]: time="2025-11-01T00:23:13.081956622Z" level=info msg="TearDown network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\" successfully" Nov 1 00:23:13.083376 containerd[1505]: time="2025-11-01T00:23:13.081988541Z" level=info msg="StopPodSandbox for \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\" returns successfully" Nov 1 00:23:13.083376 containerd[1505]: time="2025-11-01T00:23:13.082909088Z" level=info msg="RemovePodSandbox for \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\"" Nov 1 00:23:13.083376 containerd[1505]: time="2025-11-01T00:23:13.082932040Z" level=info msg="Forcibly stopping sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\"" Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.116 [WARNING][5450] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9d4fd33c-57a2-484f-b033-ef3d888b08dc", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"f00fb2abc81e929f50bb9fb6409b63e7ca93be1416a146485403de80b2c35395", Pod:"goldmane-666569f655-62wdq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.104.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5a6fb0debf1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.116 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.116 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" iface="eth0" netns="" Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.117 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.117 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.135 [INFO][5457] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" HandleID="k8s-pod-network.d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.135 [INFO][5457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.135 [INFO][5457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.143 [WARNING][5457] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" HandleID="k8s-pod-network.d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.143 [INFO][5457] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" HandleID="k8s-pod-network.d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Workload="ci--4081--3--6--n--a2a464dc28-k8s-goldmane--666569f655--62wdq-eth0" Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.145 [INFO][5457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:13.150228 containerd[1505]: 2025-11-01 00:23:13.147 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042" Nov 1 00:23:13.150228 containerd[1505]: time="2025-11-01T00:23:13.150113960Z" level=info msg="TearDown network for sandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\" successfully" Nov 1 00:23:13.157714 containerd[1505]: time="2025-11-01T00:23:13.156928829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:13.157714 containerd[1505]: time="2025-11-01T00:23:13.157086896Z" level=info msg="RemovePodSandbox \"d4beebb20aa5df21a739788caaf4928def59fca2eede50b2df5018affd65e042\" returns successfully" Nov 1 00:23:13.158102 containerd[1505]: time="2025-11-01T00:23:13.158054210Z" level=info msg="StopPodSandbox for \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\"" Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.201 [WARNING][5471] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ca02817f-7150-4fe5-a77c-3db57eb2bbb9", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952", Pod:"coredns-668d6bf9bc-6rqgg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6eb9fa905f7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.201 [INFO][5471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.201 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" iface="eth0" netns="" Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.201 [INFO][5471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.201 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.221 [INFO][5478] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" HandleID="k8s-pod-network.36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0" Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.221 [INFO][5478] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.221 [INFO][5478] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.227 [WARNING][5478] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" HandleID="k8s-pod-network.36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0" Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.227 [INFO][5478] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" HandleID="k8s-pod-network.36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0" Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.228 [INFO][5478] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:13.231835 containerd[1505]: 2025-11-01 00:23:13.230 [INFO][5471] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:23:13.232369 containerd[1505]: time="2025-11-01T00:23:13.232321371Z" level=info msg="TearDown network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\" successfully" Nov 1 00:23:13.232369 containerd[1505]: time="2025-11-01T00:23:13.232354422Z" level=info msg="StopPodSandbox for \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\" returns successfully" Nov 1 00:23:13.232926 containerd[1505]: time="2025-11-01T00:23:13.232897952Z" level=info msg="RemovePodSandbox for \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\"" Nov 1 00:23:13.232995 containerd[1505]: time="2025-11-01T00:23:13.232926934Z" level=info msg="Forcibly stopping sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\"" Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.262 [WARNING][5492] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ca02817f-7150-4fe5-a77c-3db57eb2bbb9", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-a2a464dc28", ContainerID:"6b4c8ce52917a41b0caddcede149a6b8a81d5ee282108068d3c7223c941b8952", Pod:"coredns-668d6bf9bc-6rqgg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.104.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6eb9fa905f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:13.300801 containerd[1505]: 
2025-11-01 00:23:13.262 [INFO][5492] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.262 [INFO][5492] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" iface="eth0" netns="" Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.262 [INFO][5492] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.262 [INFO][5492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.289 [INFO][5499] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" HandleID="k8s-pod-network.36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0" Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.289 [INFO][5499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.289 [INFO][5499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.295 [WARNING][5499] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" HandleID="k8s-pod-network.36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0" Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.295 [INFO][5499] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" HandleID="k8s-pod-network.36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Workload="ci--4081--3--6--n--a2a464dc28-k8s-coredns--668d6bf9bc--6rqgg-eth0" Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.296 [INFO][5499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:13.300801 containerd[1505]: 2025-11-01 00:23:13.298 [INFO][5492] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca" Nov 1 00:23:13.301870 containerd[1505]: time="2025-11-01T00:23:13.300841911Z" level=info msg="TearDown network for sandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\" successfully" Nov 1 00:23:13.306221 containerd[1505]: time="2025-11-01T00:23:13.306177654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:13.306260 containerd[1505]: time="2025-11-01T00:23:13.306240780Z" level=info msg="RemovePodSandbox \"36cbaac7e393f3e5ce0fb965509b1793168a2a11c3e096f76072a5b143d284ca\" returns successfully" Nov 1 00:23:14.033537 containerd[1505]: time="2025-11-01T00:23:14.032938897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:14.489872 containerd[1505]: time="2025-11-01T00:23:14.489741174Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:14.492824 containerd[1505]: time="2025-11-01T00:23:14.492649947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:14.492824 containerd[1505]: time="2025-11-01T00:23:14.492727499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:14.493303 kubelet[2652]: E1101 00:23:14.492943 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:14.493303 kubelet[2652]: E1101 00:23:14.493007 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:14.494650 kubelet[2652]: E1101 00:23:14.493659 2652 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqq5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-62wdq_calico-system(9d4fd33c-57a2-484f-b033-ef3d888b08dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:14.494936 containerd[1505]: time="2025-11-01T00:23:14.493469334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:14.495260 kubelet[2652]: E1101 00:23:14.495197 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:23:14.927089 containerd[1505]: time="2025-11-01T00:23:14.926888495Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:23:14.929191 containerd[1505]: time="2025-11-01T00:23:14.929112927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:14.929602 containerd[1505]: time="2025-11-01T00:23:14.929182706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:14.929942 kubelet[2652]: E1101 00:23:14.929835 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:14.929942 kubelet[2652]: E1101 00:23:14.929926 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:14.930810 kubelet[2652]: E1101 00:23:14.930119 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxqgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:14.933896 containerd[1505]: time="2025-11-01T00:23:14.933862121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:15.373124 containerd[1505]: time="2025-11-01T00:23:15.373019480Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:15.374848 containerd[1505]: time="2025-11-01T00:23:15.374765546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:15.374969 containerd[1505]: time="2025-11-01T00:23:15.374876770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:15.375165 kubelet[2652]: E1101 00:23:15.375078 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:15.375165 kubelet[2652]: E1101 00:23:15.375152 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:15.375386 kubelet[2652]: E1101 
00:23:15.375315 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxqgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:15.377140 kubelet[2652]: E1101 00:23:15.377014 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:23:16.029394 containerd[1505]: time="2025-11-01T00:23:16.029300333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:16.468808 containerd[1505]: time="2025-11-01T00:23:16.468654428Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:16.470955 containerd[1505]: time="2025-11-01T00:23:16.470796126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 
00:23:16.470955 containerd[1505]: time="2025-11-01T00:23:16.470867177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:16.471190 kubelet[2652]: E1101 00:23:16.471088 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:16.471190 kubelet[2652]: E1101 00:23:16.471155 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:16.472293 kubelet[2652]: E1101 00:23:16.471461 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45bx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b55fd6955-lwt2w_calico-apiserver(5bfe0f66-8e86-4d9f-b0e9-32499fee7221): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:16.473605 kubelet[2652]: E1101 00:23:16.473516 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:23:17.033703 kubelet[2652]: E1101 00:23:17.030228 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc" Nov 1 00:23:22.030988 kubelet[2652]: E1101 00:23:22.030883 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:23:27.030290 kubelet[2652]: E1101 00:23:27.029404 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:23:27.030290 kubelet[2652]: E1101 00:23:27.029411 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:23:27.032084 kubelet[2652]: E1101 00:23:27.031999 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:23:28.030834 kubelet[2652]: E1101 00:23:28.030260 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:23:32.031939 containerd[1505]: time="2025-11-01T00:23:32.031837027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:32.464784 containerd[1505]: time="2025-11-01T00:23:32.464737624Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:32.466097 containerd[1505]: time="2025-11-01T00:23:32.466037180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:32.466179 containerd[1505]: time="2025-11-01T00:23:32.466139433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:32.466345 kubelet[2652]: E1101 00:23:32.466310 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:32.466649 kubelet[2652]: E1101 00:23:32.466356 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:32.466649 kubelet[2652]: E1101 00:23:32.466477 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d162083005e04424a0cd373354941ab6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784c7f6667-sp4fm_calico-system(6d4596f8-201b-4071-856f-d068e8d1a4cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:32.468512 containerd[1505]: time="2025-11-01T00:23:32.468486570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
00:23:32.898298 containerd[1505]: time="2025-11-01T00:23:32.898161085Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:32.900705 containerd[1505]: time="2025-11-01T00:23:32.900372187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:32.900705 containerd[1505]: time="2025-11-01T00:23:32.900540183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:32.901971 kubelet[2652]: E1101 00:23:32.900927 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:32.901971 kubelet[2652]: E1101 00:23:32.901001 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:32.901971 kubelet[2652]: E1101 00:23:32.901200 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784c7f6667-sp4fm_calico-system(6d4596f8-201b-4071-856f-d068e8d1a4cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:32.902863 kubelet[2652]: E1101 00:23:32.902413 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc" Nov 1 00:23:37.030525 containerd[1505]: time="2025-11-01T00:23:37.030451885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:37.636196 containerd[1505]: time="2025-11-01T00:23:37.636078744Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:37.638268 containerd[1505]: time="2025-11-01T00:23:37.638200950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:37.638583 containerd[1505]: time="2025-11-01T00:23:37.638275122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=85" Nov 1 00:23:37.638914 kubelet[2652]: E1101 00:23:37.638830 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:37.640239 kubelet[2652]: E1101 00:23:37.639155 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:37.640239 kubelet[2652]: E1101 00:23:37.640090 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2hlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d9dfb6c85-btn4p_calico-system(98898523-1f05-472a-90a7-fe467ee6a22e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:37.643040 kubelet[2652]: E1101 00:23:37.642959 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:23:38.030943 containerd[1505]: time="2025-11-01T00:23:38.030573634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:38.465678 containerd[1505]: 
time="2025-11-01T00:23:38.465608419Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:38.467109 containerd[1505]: time="2025-11-01T00:23:38.467075760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:38.467680 containerd[1505]: time="2025-11-01T00:23:38.467158566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:38.467712 kubelet[2652]: E1101 00:23:38.467503 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:38.467712 kubelet[2652]: E1101 00:23:38.467647 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:38.468459 containerd[1505]: time="2025-11-01T00:23:38.468437420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:38.468676 kubelet[2652]: E1101 00:23:38.468542 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqq5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-62wdq_calico-system(9d4fd33c-57a2-484f-b033-ef3d888b08dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:38.469921 kubelet[2652]: E1101 00:23:38.469893 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:23:38.923718 containerd[1505]: time="2025-11-01T00:23:38.923572488Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:38.926697 containerd[1505]: time="2025-11-01T00:23:38.925257341Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:38.926697 containerd[1505]: time="2025-11-01T00:23:38.925368581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:38.926897 kubelet[2652]: E1101 00:23:38.925626 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:38.926897 kubelet[2652]: E1101 00:23:38.925707 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:38.926897 kubelet[2652]: E1101 00:23:38.925870 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dn4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b55fd6955-6t7nj_calico-apiserver(fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:38.927456 kubelet[2652]: E1101 00:23:38.927422 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:23:39.030483 containerd[1505]: time="2025-11-01T00:23:39.030420520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:39.475691 containerd[1505]: time="2025-11-01T00:23:39.475582405Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:39.477564 containerd[1505]: time="2025-11-01T00:23:39.477500964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:39.477818 containerd[1505]: time="2025-11-01T00:23:39.477626492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:39.477864 kubelet[2652]: E1101 00:23:39.477784 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:39.477864 kubelet[2652]: E1101 00:23:39.477851 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:39.478239 kubelet[2652]: E1101 00:23:39.478146 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxqgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeE
scalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:39.479143 containerd[1505]: time="2025-11-01T00:23:39.479097823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:39.921902 containerd[1505]: time="2025-11-01T00:23:39.921847322Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:39.923492 containerd[1505]: time="2025-11-01T00:23:39.923440113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:39.923584 containerd[1505]: time="2025-11-01T00:23:39.923547577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:39.923825 kubelet[2652]: E1101 00:23:39.923787 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:39.923870 kubelet[2652]: E1101 00:23:39.923843 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:39.924174 kubelet[2652]: E1101 00:23:39.924109 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45bx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7b55fd6955-lwt2w_calico-apiserver(5bfe0f66-8e86-4d9f-b0e9-32499fee7221): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:39.925787 containerd[1505]: time="2025-11-01T00:23:39.925755465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:39.926385 kubelet[2652]: E1101 00:23:39.926335 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:23:40.359886 containerd[1505]: 
time="2025-11-01T00:23:40.359817959Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:40.363684 containerd[1505]: time="2025-11-01T00:23:40.361389232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:40.363684 containerd[1505]: time="2025-11-01T00:23:40.361484733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:40.363841 kubelet[2652]: E1101 00:23:40.362842 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:40.363841 kubelet[2652]: E1101 00:23:40.362901 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:40.363841 kubelet[2652]: E1101 00:23:40.363040 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxqgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:40.364627 kubelet[2652]: E1101 00:23:40.364565 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:23:45.031588 kubelet[2652]: E1101 00:23:45.031492 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc" Nov 1 00:23:49.035819 kubelet[2652]: E1101 00:23:49.035515 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:23:52.030010 kubelet[2652]: E1101 00:23:52.029364 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:23:53.029565 kubelet[2652]: E1101 00:23:53.029405 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:23:53.030214 kubelet[2652]: E1101 00:23:53.030169 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:23:53.040995 kubelet[2652]: E1101 00:23:53.040931 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:23:53.377010 systemd[1]: Started sshd@7-95.217.181.13:22-147.75.109.163:33798.service - OpenSSH per-connection server daemon (147.75.109.163:33798). 
Nov 1 00:23:54.436871 sshd[5553]: Accepted publickey for core from 147.75.109.163 port 33798 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:23:54.442128 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:54.453866 systemd-logind[1487]: New session 8 of user core. Nov 1 00:23:54.460059 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:23:55.744495 sshd[5553]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:55.750739 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:23:55.751605 systemd[1]: sshd@7-95.217.181.13:22-147.75.109.163:33798.service: Deactivated successfully. Nov 1 00:23:55.754849 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:23:55.761504 systemd-logind[1487]: Removed session 8. Nov 1 00:23:57.030258 kubelet[2652]: E1101 00:23:57.030104 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc" Nov 1 00:24:00.031402 kubelet[2652]: E1101 00:24:00.030862 2652 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:24:00.955129 systemd[1]: Started sshd@8-95.217.181.13:22-147.75.109.163:55230.service - OpenSSH per-connection server daemon (147.75.109.163:55230). Nov 1 00:24:02.076768 sshd[5590]: Accepted publickey for core from 147.75.109.163 port 55230 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:02.082005 sshd[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:02.091884 systemd-logind[1487]: New session 9 of user core. Nov 1 00:24:02.100057 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:24:02.957037 sshd[5590]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:02.962043 systemd[1]: sshd@8-95.217.181.13:22-147.75.109.163:55230.service: Deactivated successfully. Nov 1 00:24:02.965317 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:24:02.966474 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:24:02.967732 systemd-logind[1487]: Removed session 9. Nov 1 00:24:03.153233 systemd[1]: Started sshd@9-95.217.181.13:22-147.75.109.163:55236.service - OpenSSH per-connection server daemon (147.75.109.163:55236). 
Nov 1 00:24:04.289412 sshd[5604]: Accepted publickey for core from 147.75.109.163 port 55236 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:04.292574 sshd[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:04.300365 systemd-logind[1487]: New session 10 of user core. Nov 1 00:24:04.307909 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:24:05.030689 kubelet[2652]: E1101 00:24:05.029888 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:24:05.195021 sshd[5604]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:05.203122 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:24:05.204570 systemd[1]: sshd@9-95.217.181.13:22-147.75.109.163:55236.service: Deactivated successfully. Nov 1 00:24:05.208276 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:24:05.210129 systemd-logind[1487]: Removed session 10. 
Nov 1 00:24:05.351873 systemd[1]: Started sshd@10-95.217.181.13:22-147.75.109.163:55246.service - OpenSSH per-connection server daemon (147.75.109.163:55246). Nov 1 00:24:06.032798 kubelet[2652]: E1101 00:24:06.032261 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:24:06.371964 sshd[5616]: Accepted publickey for core from 147.75.109.163 port 55246 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:06.376055 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:06.390024 systemd-logind[1487]: New session 11 of user core. Nov 1 00:24:06.399902 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 1 00:24:07.029574 kubelet[2652]: E1101 00:24:07.029517 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:24:07.029574 kubelet[2652]: E1101 00:24:07.029576 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:24:07.175298 sshd[5616]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:07.177748 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:24:07.179450 systemd[1]: sshd@10-95.217.181.13:22-147.75.109.163:55246.service: Deactivated successfully. Nov 1 00:24:07.181381 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:24:07.183304 systemd-logind[1487]: Removed session 11. 
Nov 1 00:24:09.050606 kubelet[2652]: E1101 00:24:09.050300 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc" Nov 1 00:24:11.029815 kubelet[2652]: E1101 00:24:11.028745 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:24:12.389922 systemd[1]: Started sshd@11-95.217.181.13:22-147.75.109.163:38964.service - OpenSSH per-connection server daemon (147.75.109.163:38964). 
Nov 1 00:24:13.502391 sshd[5636]: Accepted publickey for core from 147.75.109.163 port 38964 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:13.504372 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:13.512554 systemd-logind[1487]: New session 12 of user core. Nov 1 00:24:13.516977 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:24:14.330782 sshd[5636]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:14.338339 systemd[1]: sshd@11-95.217.181.13:22-147.75.109.163:38964.service: Deactivated successfully. Nov 1 00:24:14.341337 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:24:14.344805 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:24:14.347779 systemd-logind[1487]: Removed session 12. Nov 1 00:24:16.029649 kubelet[2652]: E1101 00:24:16.029562 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:24:17.031468 kubelet[2652]: E1101 00:24:17.030295 2652 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:24:18.028767 kubelet[2652]: E1101 00:24:18.028601 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:24:18.030947 kubelet[2652]: E1101 00:24:18.030346 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:24:19.500105 systemd[1]: Started sshd@12-95.217.181.13:22-147.75.109.163:38972.service - OpenSSH per-connection server daemon (147.75.109.163:38972). 
Nov 1 00:24:20.028889 containerd[1505]: time="2025-11-01T00:24:20.028834393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:24:20.501762 sshd[5651]: Accepted publickey for core from 147.75.109.163 port 38972 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:20.502809 sshd[5651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:20.504685 containerd[1505]: time="2025-11-01T00:24:20.503111283Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:20.507416 containerd[1505]: time="2025-11-01T00:24:20.507265035Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:24:20.507416 containerd[1505]: time="2025-11-01T00:24:20.507377804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:24:20.509512 kubelet[2652]: E1101 00:24:20.507643 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:20.509512 kubelet[2652]: E1101 00:24:20.507726 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:20.509512 kubelet[2652]: E1101 
00:24:20.507863 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d162083005e04424a0cd373354941ab6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784c7f6667-sp4fm_calico-system(6d4596f8-201b-4071-856f-d068e8d1a4cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:20.513309 containerd[1505]: time="2025-11-01T00:24:20.513214670Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:24:20.515995 systemd-logind[1487]: New session 13 of user core. Nov 1 00:24:20.522879 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:24:20.944968 containerd[1505]: time="2025-11-01T00:24:20.944871970Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:20.947173 containerd[1505]: time="2025-11-01T00:24:20.947045535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:20.947501 containerd[1505]: time="2025-11-01T00:24:20.947010778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:24:20.948282 kubelet[2652]: E1101 00:24:20.947982 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:20.948282 kubelet[2652]: E1101 00:24:20.948069 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:20.949152 kubelet[2652]: E1101 00:24:20.948889 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w7xlr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-784c7f6667-sp4fm_calico-system(6d4596f8-201b-4071-856f-d068e8d1a4cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:20.950401 kubelet[2652]: E1101 00:24:20.950278 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc" Nov 1 00:24:21.311984 sshd[5651]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:21.315894 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:24:21.318548 systemd[1]: sshd@12-95.217.181.13:22-147.75.109.163:38972.service: Deactivated successfully. Nov 1 00:24:21.321054 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:24:21.322779 systemd-logind[1487]: Removed session 13. Nov 1 00:24:24.659025 systemd[1]: run-containerd-runc-k8s.io-2254ec1b227c42ca5fd15dbbf22099a793cefca43fa600a6978ba3b08d710e23-runc.wPAy7W.mount: Deactivated successfully. 
Nov 1 00:24:25.029073 containerd[1505]: time="2025-11-01T00:24:25.028899582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:25.471262 containerd[1505]: time="2025-11-01T00:24:25.471027736Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:25.472847 containerd[1505]: time="2025-11-01T00:24:25.472702337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:25.472847 containerd[1505]: time="2025-11-01T00:24:25.472794055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:25.475093 kubelet[2652]: E1101 00:24:25.474786 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:25.475093 kubelet[2652]: E1101 00:24:25.474853 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:25.475093 kubelet[2652]: E1101 00:24:25.475018 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2hlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6d9dfb6c85-btn4p_calico-system(98898523-1f05-472a-90a7-fe467ee6a22e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:25.476739 kubelet[2652]: E1101 00:24:25.476483 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:24:26.527797 systemd[1]: Started sshd@13-95.217.181.13:22-147.75.109.163:44328.service - OpenSSH per-connection server daemon (147.75.109.163:44328). 
Nov 1 00:24:27.624690 sshd[5694]: Accepted publickey for core from 147.75.109.163 port 44328 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:27.626277 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:27.635126 systemd-logind[1487]: New session 14 of user core. Nov 1 00:24:27.641943 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:24:28.509169 sshd[5694]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:28.515370 systemd[1]: sshd@13-95.217.181.13:22-147.75.109.163:44328.service: Deactivated successfully. Nov 1 00:24:28.519394 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:24:28.520791 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:24:28.522269 systemd-logind[1487]: Removed session 14. Nov 1 00:24:28.703277 systemd[1]: Started sshd@14-95.217.181.13:22-147.75.109.163:44330.service - OpenSSH per-connection server daemon (147.75.109.163:44330). 
Nov 1 00:24:29.030303 containerd[1505]: time="2025-11-01T00:24:29.030221524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:29.489461 containerd[1505]: time="2025-11-01T00:24:29.489283204Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:29.490852 containerd[1505]: time="2025-11-01T00:24:29.490772807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:29.490909 containerd[1505]: time="2025-11-01T00:24:29.490829838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:29.490985 kubelet[2652]: E1101 00:24:29.490949 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:29.491287 kubelet[2652]: E1101 00:24:29.491001 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:29.491287 kubelet[2652]: E1101 00:24:29.491130 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2dn4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7b55fd6955-6t7nj_calico-apiserver(fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:29.492904 kubelet[2652]: E1101 00:24:29.492879 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e" Nov 1 00:24:29.806498 sshd[5707]: Accepted publickey for core from 147.75.109.163 port 44330 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:29.806298 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:29.811170 systemd-logind[1487]: New session 15 of user core. Nov 1 00:24:29.817930 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 1 00:24:30.032856 containerd[1505]: time="2025-11-01T00:24:30.032606538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:30.472353 containerd[1505]: time="2025-11-01T00:24:30.472290156Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:30.473632 containerd[1505]: time="2025-11-01T00:24:30.473576003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:30.473705 containerd[1505]: time="2025-11-01T00:24:30.473680777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:30.473929 kubelet[2652]: E1101 00:24:30.473859 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:30.473929 kubelet[2652]: E1101 00:24:30.473906 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:30.474183 kubelet[2652]: E1101 00:24:30.474147 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-45bx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7b55fd6955-lwt2w_calico-apiserver(5bfe0f66-8e86-4d9f-b0e9-32499fee7221): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:30.474592 containerd[1505]: time="2025-11-01T00:24:30.474574194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:30.476268 kubelet[2652]: E1101 00:24:30.476198 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:24:30.894068 containerd[1505]: time="2025-11-01T00:24:30.893559812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:30.895875 containerd[1505]: time="2025-11-01T00:24:30.895688960Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:30.896385 containerd[1505]: time="2025-11-01T00:24:30.896161077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:24:30.896992 kubelet[2652]: E1101 00:24:30.896314 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:30.896992 kubelet[2652]: E1101 00:24:30.896562 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:30.896992 kubelet[2652]: E1101 00:24:30.896809 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxqgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Cap
abilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:30.900620 containerd[1505]: time="2025-11-01T00:24:30.900049222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:24:30.904462 sshd[5707]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:30.913074 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:24:30.914524 systemd[1]: sshd@14-95.217.181.13:22-147.75.109.163:44330.service: Deactivated successfully. Nov 1 00:24:30.918972 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:24:30.920726 systemd-logind[1487]: Removed session 15. Nov 1 00:24:31.064926 systemd[1]: Started sshd@15-95.217.181.13:22-147.75.109.163:33252.service - OpenSSH per-connection server daemon (147.75.109.163:33252). 
Nov 1 00:24:31.350527 containerd[1505]: time="2025-11-01T00:24:31.350463154Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:31.352377 containerd[1505]: time="2025-11-01T00:24:31.352312297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:24:31.352496 containerd[1505]: time="2025-11-01T00:24:31.352439004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:24:31.352826 kubelet[2652]: E1101 00:24:31.352740 2652 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:31.352909 kubelet[2652]: E1101 00:24:31.352843 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:31.353289 kubelet[2652]: E1101 00:24:31.353198 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxqgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4lkfc_calico-system(ae9e8348-8b23-4471-92e0-30ed8445c882): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:31.354033 containerd[1505]: time="2025-11-01T00:24:31.353996110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:24:31.354515 kubelet[2652]: E1101 00:24:31.354462 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:24:31.790095 containerd[1505]: time="2025-11-01T00:24:31.789718907Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:31.791523 containerd[1505]: time="2025-11-01T00:24:31.791370186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:24:31.791745 containerd[1505]: time="2025-11-01T00:24:31.791468387Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:31.792844 kubelet[2652]: E1101 00:24:31.792798 2652 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:31.792953 kubelet[2652]: E1101 00:24:31.792859 2652 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:31.793031 kubelet[2652]: E1101 00:24:31.792981 2652 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-p
air,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqq5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-62wdq_calico-system(9d4fd33c-57a2-484f-b033-ef3d888b08dc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:31.794353 kubelet[2652]: E1101 00:24:31.794311 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc" Nov 1 00:24:32.108466 sshd[5718]: Accepted publickey for core from 147.75.109.163 port 33252 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:32.110880 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:32.120222 systemd-logind[1487]: New session 16 of user core. Nov 1 00:24:32.124994 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:24:33.512761 sshd[5718]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:33.522916 systemd[1]: sshd@15-95.217.181.13:22-147.75.109.163:33252.service: Deactivated successfully. Nov 1 00:24:33.524400 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:24:33.526974 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:24:33.528289 systemd-logind[1487]: Removed session 16. Nov 1 00:24:33.678995 systemd[1]: Started sshd@16-95.217.181.13:22-147.75.109.163:33256.service - OpenSSH per-connection server daemon (147.75.109.163:33256). Nov 1 00:24:34.721882 sshd[5737]: Accepted publickey for core from 147.75.109.163 port 33256 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:34.724846 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:34.731715 systemd-logind[1487]: New session 17 of user core. Nov 1 00:24:34.734781 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 1 00:24:35.040796 kubelet[2652]: E1101 00:24:35.040117 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc" Nov 1 00:24:35.740992 sshd[5737]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:35.748543 systemd[1]: sshd@16-95.217.181.13:22-147.75.109.163:33256.service: Deactivated successfully. Nov 1 00:24:35.750879 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:24:35.752070 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:24:35.753977 systemd-logind[1487]: Removed session 17. Nov 1 00:24:35.917026 systemd[1]: Started sshd@17-95.217.181.13:22-147.75.109.163:33260.service - OpenSSH per-connection server daemon (147.75.109.163:33260). Nov 1 00:24:36.929730 sshd[5769]: Accepted publickey for core from 147.75.109.163 port 33260 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:36.932256 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:36.943009 systemd-logind[1487]: New session 18 of user core. 
Nov 1 00:24:36.951873 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:24:37.696856 sshd[5769]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:37.703057 systemd[1]: sshd@17-95.217.181.13:22-147.75.109.163:33260.service: Deactivated successfully. Nov 1 00:24:37.706370 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:24:37.708375 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:24:37.710348 systemd-logind[1487]: Removed session 18. Nov 1 00:24:39.028378 kubelet[2652]: E1101 00:24:39.028328 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e" Nov 1 00:24:42.032700 kubelet[2652]: E1101 00:24:42.031594 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882" Nov 1 00:24:42.876749 systemd[1]: Started sshd@18-95.217.181.13:22-147.75.109.163:41886.service - OpenSSH per-connection server daemon (147.75.109.163:41886). Nov 1 00:24:43.029420 kubelet[2652]: E1101 00:24:43.029093 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" Nov 1 00:24:43.883573 sshd[5784]: Accepted publickey for core from 147.75.109.163 port 41886 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:43.885828 sshd[5784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:43.891054 systemd-logind[1487]: New session 19 of user core. Nov 1 00:24:43.898074 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 1 00:24:44.032754 kubelet[2652]: E1101 00:24:44.032607 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e"
Nov 1 00:24:44.644289 sshd[5784]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:44.649605 systemd[1]: sshd@18-95.217.181.13:22-147.75.109.163:41886.service: Deactivated successfully.
Nov 1 00:24:44.650733 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:24:44.652275 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:24:44.653481 systemd-logind[1487]: Removed session 19.
Nov 1 00:24:45.028719 kubelet[2652]: E1101 00:24:45.028132 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc"
Nov 1 00:24:46.032137 kubelet[2652]: E1101 00:24:46.032019 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc"
Nov 1 00:24:49.824978 systemd[1]: Started sshd@19-95.217.181.13:22-147.75.109.163:41902.service - OpenSSH per-connection server daemon (147.75.109.163:41902).
Nov 1 00:24:50.841212 sshd[5801]: Accepted publickey for core from 147.75.109.163 port 41902 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:24:50.843956 sshd[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:24:50.852896 systemd-logind[1487]: New session 20 of user core.
Nov 1 00:24:50.856861 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 1 00:24:51.030205 kubelet[2652]: E1101 00:24:51.029830 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e"
Nov 1 00:24:51.629441 sshd[5801]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:51.634868 systemd[1]: sshd@19-95.217.181.13:22-147.75.109.163:41902.service: Deactivated successfully.
Nov 1 00:24:51.638386 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:24:51.640249 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:24:51.642470 systemd-logind[1487]: Removed session 20.
Nov 1 00:24:54.031711 kubelet[2652]: E1101 00:24:54.031512 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882"
Nov 1 00:24:54.713047 systemd[1]: run-containerd-runc-k8s.io-2254ec1b227c42ca5fd15dbbf22099a793cefca43fa600a6978ba3b08d710e23-runc.yn5WJQ.mount: Deactivated successfully.
Nov 1 00:24:56.031967 kubelet[2652]: E1101 00:24:56.030598 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e"
Nov 1 00:24:57.030711 kubelet[2652]: E1101 00:24:57.030165 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221"
Nov 1 00:24:57.031357 kubelet[2652]: E1101 00:24:57.031251 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc"
Nov 1 00:24:59.029179 kubelet[2652]: E1101 00:24:59.029071 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-62wdq" podUID="9d4fd33c-57a2-484f-b033-ef3d888b08dc"
Nov 1 00:25:03.028835 kubelet[2652]: E1101 00:25:03.028739 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6d9dfb6c85-btn4p" podUID="98898523-1f05-472a-90a7-fe467ee6a22e"
Nov 1 00:25:05.028334 kubelet[2652]: E1101 00:25:05.028267 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4lkfc" podUID="ae9e8348-8b23-4471-92e0-30ed8445c882"
Nov 1 00:25:07.316081 systemd[1]: cri-containerd-135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff.scope: Deactivated successfully.
Nov 1 00:25:07.316424 systemd[1]: cri-containerd-135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff.scope: Consumed 23.021s CPU time.
Nov 1 00:25:07.376849 systemd[1]: cri-containerd-bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05.scope: Deactivated successfully.
Nov 1 00:25:07.377213 systemd[1]: cri-containerd-bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05.scope: Consumed 4.171s CPU time, 20.4M memory peak, 0B memory swap peak.
Nov 1 00:25:07.403213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff-rootfs.mount: Deactivated successfully.
Nov 1 00:25:07.429720 containerd[1505]: time="2025-11-01T00:25:07.416507890Z" level=info msg="shim disconnected" id=135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff namespace=k8s.io
Nov 1 00:25:07.444079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05-rootfs.mount: Deactivated successfully.
Nov 1 00:25:07.455375 containerd[1505]: time="2025-11-01T00:25:07.455312189Z" level=warning msg="cleaning up after shim disconnected" id=135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff namespace=k8s.io
Nov 1 00:25:07.455626 containerd[1505]: time="2025-11-01T00:25:07.455595113Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 00:25:07.472866 containerd[1505]: time="2025-11-01T00:25:07.471842719Z" level=info msg="shim disconnected" id=bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05 namespace=k8s.io
Nov 1 00:25:07.472866 containerd[1505]: time="2025-11-01T00:25:07.471916931Z" level=warning msg="cleaning up after shim disconnected" id=bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05 namespace=k8s.io
Nov 1 00:25:07.472866 containerd[1505]: time="2025-11-01T00:25:07.471928742Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 00:25:07.536576 kubelet[2652]: I1101 00:25:07.536322 2652 status_manager.go:890] "Failed to get status for pod" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41256->10.0.0.2:2379: read: connection timed out"
Nov 1 00:25:07.585595 kubelet[2652]: E1101 00:25:07.536285 2652 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41180->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{whisker-784c7f6667-sp4fm.1873ba2a605bbc0e calico-system 1635 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:whisker-784c7f6667-sp4fm,UID:6d4596f8-201b-4071-856f-d068e8d1a4cc,APIVersion:v1,ResourceVersion:899,FieldPath:spec.containers{whisker},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/whisker:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-a2a464dc28,},FirstTimestamp:2025-11-01 00:22:50 +0000 UTC,LastTimestamp:2025-11-01 00:24:57.030064771 +0000 UTC m=+165.169577645,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-a2a464dc28,}"
Nov 1 00:25:07.631731 kubelet[2652]: E1101 00:25:07.631282 2652 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41358->10.0.0.2:2379: read: connection timed out"
Nov 1 00:25:07.912056 kubelet[2652]: I1101 00:25:07.911349 2652 scope.go:117] "RemoveContainer" containerID="bc63396296b5d27844dceba6b25ba80582a03624971f51831ee19e2677c6fe05"
Nov 1 00:25:07.912056 kubelet[2652]: I1101 00:25:07.911839 2652 scope.go:117] "RemoveContainer" containerID="135ac5e3003fd0dab98244efab0451121197d9fd84ba7a3be6deba3ea05392ff"
Nov 1 00:25:07.925363 containerd[1505]: time="2025-11-01T00:25:07.925309061Z" level=info msg="CreateContainer within sandbox \"ef0af0b4977dc450fe80438c546efd245d8702eff561149b200d9433354e65a1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 1 00:25:07.926796 containerd[1505]: time="2025-11-01T00:25:07.926676350Z" level=info msg="CreateContainer within sandbox \"6dc858e1f37b13d91fba2508fda06fe33f713e20abf79bfe0f11067c9051f691\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 1 00:25:08.052339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857042394.mount: Deactivated successfully.
Nov 1 00:25:08.084323 kubelet[2652]: E1101 00:25:08.082123 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-6t7nj" podUID="fb9d770f-45bf-4ea7-b239-8b2dc1a69c6e"
Nov 1 00:25:08.091889 containerd[1505]: time="2025-11-01T00:25:08.091851094Z" level=info msg="CreateContainer within sandbox \"ef0af0b4977dc450fe80438c546efd245d8702eff561149b200d9433354e65a1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d757dbec52f1afc31a38935e1d2903df38eddd770c50cf26bb4478ff9e6c68d8\""
Nov 1 00:25:08.092679 containerd[1505]: time="2025-11-01T00:25:08.092545653Z" level=info msg="StartContainer for \"d757dbec52f1afc31a38935e1d2903df38eddd770c50cf26bb4478ff9e6c68d8\""
Nov 1 00:25:08.132853 systemd[1]: Started cri-containerd-d757dbec52f1afc31a38935e1d2903df38eddd770c50cf26bb4478ff9e6c68d8.scope - libcontainer container d757dbec52f1afc31a38935e1d2903df38eddd770c50cf26bb4478ff9e6c68d8.
Nov 1 00:25:08.134828 containerd[1505]: time="2025-11-01T00:25:08.133377424Z" level=info msg="CreateContainer within sandbox \"6dc858e1f37b13d91fba2508fda06fe33f713e20abf79bfe0f11067c9051f691\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"88eea7d261eeecf69c54f6bb92085dc6725c801a64a029813b3bdba43fabe98f\""
Nov 1 00:25:08.137484 containerd[1505]: time="2025-11-01T00:25:08.137461343Z" level=info msg="StartContainer for \"88eea7d261eeecf69c54f6bb92085dc6725c801a64a029813b3bdba43fabe98f\""
Nov 1 00:25:08.180028 containerd[1505]: time="2025-11-01T00:25:08.179896707Z" level=info msg="StartContainer for \"d757dbec52f1afc31a38935e1d2903df38eddd770c50cf26bb4478ff9e6c68d8\" returns successfully"
Nov 1 00:25:08.188834 systemd[1]: Started cri-containerd-88eea7d261eeecf69c54f6bb92085dc6725c801a64a029813b3bdba43fabe98f.scope - libcontainer container 88eea7d261eeecf69c54f6bb92085dc6725c801a64a029813b3bdba43fabe98f.
Nov 1 00:25:08.235409 containerd[1505]: time="2025-11-01T00:25:08.235367590Z" level=info msg="StartContainer for \"88eea7d261eeecf69c54f6bb92085dc6725c801a64a029813b3bdba43fabe98f\" returns successfully"
Nov 1 00:25:09.030443 kubelet[2652]: E1101 00:25:09.030278 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-784c7f6667-sp4fm" podUID="6d4596f8-201b-4071-856f-d068e8d1a4cc"
Nov 1 00:25:10.029250 kubelet[2652]: E1101 00:25:10.028861 2652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7b55fd6955-lwt2w" podUID="5bfe0f66-8e86-4d9f-b0e9-32499fee7221"