Nov 8 00:29:42.871184 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:29:42.871207 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:29:42.871216 kernel: BIOS-provided physical RAM map:
Nov 8 00:29:42.871222 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 8 00:29:42.871228 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 8 00:29:42.871233 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:29:42.871240 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Nov 8 00:29:42.871262 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Nov 8 00:29:42.871270 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:29:42.871276 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 00:29:42.871282 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:29:42.871287 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:29:42.871293 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 8 00:29:42.871299 kernel: NX (Execute Disable) protection: active
Nov 8 00:29:42.871307 kernel: APIC: Static calls initialized
Nov 8 00:29:42.871313 kernel: SMBIOS 3.0.0 present.
Nov 8 00:29:42.871320 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Nov 8 00:29:42.871326 kernel: Hypervisor detected: KVM
Nov 8 00:29:42.871332 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:29:42.871338 kernel: kvm-clock: using sched offset of 3102498389 cycles
Nov 8 00:29:42.871344 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:29:42.871362 kernel: tsc: Detected 2445.404 MHz processor
Nov 8 00:29:42.871369 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:29:42.871377 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:29:42.871383 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Nov 8 00:29:42.871390 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:29:42.871396 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:29:42.871402 kernel: Using GB pages for direct mapping
Nov 8 00:29:42.871408 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:29:42.871414 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Nov 8 00:29:42.871421 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:42.871427 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:42.871435 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:42.871441 kernel: ACPI: FACS 0x000000007CFE0000 000040
Nov 8 00:29:42.871447 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:42.871453 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:42.871460 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:42.871466 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:29:42.871472 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Nov 8 00:29:42.871478 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Nov 8 00:29:42.871489 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Nov 8 00:29:42.871495 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Nov 8 00:29:42.871502 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Nov 8 00:29:42.871509 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Nov 8 00:29:42.871515 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Nov 8 00:29:42.871522 kernel: No NUMA configuration found
Nov 8 00:29:42.871528 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Nov 8 00:29:42.871536 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Nov 8 00:29:42.871543 kernel: Zone ranges:
Nov 8 00:29:42.871549 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:29:42.871556 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Nov 8 00:29:42.871562 kernel: Normal empty
Nov 8 00:29:42.871569 kernel: Movable zone start for each node
Nov 8 00:29:42.871575 kernel: Early memory node ranges
Nov 8 00:29:42.871582 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:29:42.871588 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Nov 8 00:29:42.871596 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Nov 8 00:29:42.871603 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:29:42.871609 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:29:42.871615 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 8 00:29:42.871622 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:29:42.871628 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:29:42.871635 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:29:42.871641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:29:42.871648 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:29:42.871656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:29:42.871663 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:29:42.871669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:29:42.871676 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:29:42.871682 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:29:42.871689 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:29:42.871695 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:29:42.871702 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 00:29:42.871709 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:29:42.871715 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:29:42.871723 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:29:42.871730 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:29:42.871736 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:29:42.871743 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:29:42.871749 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 8 00:29:42.871757 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:29:42.871763 kernel: random: crng init done
Nov 8 00:29:42.871770 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:29:42.871778 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:29:42.871784 kernel: Fallback order for Node 0: 0
Nov 8 00:29:42.871791 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Nov 8 00:29:42.871797 kernel: Policy zone: DMA32
Nov 8 00:29:42.871804 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:29:42.871811 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 125152K reserved, 0K cma-reserved)
Nov 8 00:29:42.871817 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:29:42.871824 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:29:42.871831 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:29:42.871838 kernel: Dynamic Preempt: voluntary
Nov 8 00:29:42.871845 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:29:42.871852 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:29:42.871859 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:29:42.871866 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:29:42.871873 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:29:42.871879 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:29:42.871886 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:29:42.871892 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:29:42.871900 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:29:42.871907 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:29:42.871913 kernel: Console: colour VGA+ 80x25
Nov 8 00:29:42.871920 kernel: printk: console [tty0] enabled
Nov 8 00:29:42.871926 kernel: printk: console [ttyS0] enabled
Nov 8 00:29:42.871933 kernel: ACPI: Core revision 20230628
Nov 8 00:29:42.871939 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:29:42.871946 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:29:42.871952 kernel: x2apic enabled
Nov 8 00:29:42.871960 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:29:42.871967 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:29:42.871973 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:29:42.871980 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Nov 8 00:29:42.871986 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:29:42.871993 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:29:42.871999 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:29:42.872006 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:29:42.872019 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:29:42.872026 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:29:42.872033 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 8 00:29:42.872040 kernel: active return thunk: retbleed_return_thunk
Nov 8 00:29:42.872048 kernel: RETBleed: Mitigation: untrained return thunk
Nov 8 00:29:42.872055 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:29:42.872062 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:29:42.872069 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:29:42.872076 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:29:42.872084 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:29:42.872091 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:29:42.872099 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 8 00:29:42.872106 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:29:42.872112 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:29:42.872119 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:29:42.872126 kernel: landlock: Up and running.
Nov 8 00:29:42.872133 kernel: SELinux: Initializing.
Nov 8 00:29:42.872140 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:29:42.872148 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:29:42.872155 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 8 00:29:42.872162 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:29:42.872170 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:29:42.872177 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:29:42.872184 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:29:42.872190 kernel: ... version: 0
Nov 8 00:29:42.872197 kernel: ... bit width: 48
Nov 8 00:29:42.872205 kernel: ... generic registers: 6
Nov 8 00:29:42.872212 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:29:42.872219 kernel: ... max period: 00007fffffffffff
Nov 8 00:29:42.872226 kernel: ... fixed-purpose events: 0
Nov 8 00:29:42.872233 kernel: ... event mask: 000000000000003f
Nov 8 00:29:42.872240 kernel: signal: max sigframe size: 1776
Nov 8 00:29:42.874272 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:29:42.874288 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:29:42.874297 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:29:42.874339 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:29:42.874371 kernel: .... node #0, CPUs: #1
Nov 8 00:29:42.874386 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:29:42.874399 kernel: smpboot: Max logical packages: 1
Nov 8 00:29:42.874413 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Nov 8 00:29:42.874421 kernel: devtmpfs: initialized
Nov 8 00:29:42.874428 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:29:42.874436 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:29:42.874443 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:29:42.874450 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:29:42.874460 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:29:42.874467 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:29:42.874474 kernel: audit: type=2000 audit(1762561781.273:1): state=initialized audit_enabled=0 res=1
Nov 8 00:29:42.874481 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:29:42.874488 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:29:42.874495 kernel: cpuidle: using governor menu
Nov 8 00:29:42.874502 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:29:42.874509 kernel: dca service started, version 1.12.1
Nov 8 00:29:42.874516 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:29:42.874525 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:29:42.874532 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:29:42.874539 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:29:42.874546 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:29:42.874553 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:29:42.874560 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:29:42.874567 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:29:42.874574 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:29:42.874581 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:29:42.874589 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:29:42.874596 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:29:42.874603 kernel: ACPI: Interpreter enabled
Nov 8 00:29:42.874610 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:29:42.874617 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:29:42.874624 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:29:42.874631 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:29:42.874638 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:29:42.874645 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:29:42.874779 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:29:42.874868 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:29:42.874944 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:29:42.874955 kernel: PCI host bridge to bus 0000:00
Nov 8 00:29:42.875035 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:29:42.875105 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:29:42.875177 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:29:42.875243 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Nov 8 00:29:42.877115 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:29:42.877187 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 8 00:29:42.877288 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:29:42.877402 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:29:42.877492 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Nov 8 00:29:42.877577 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Nov 8 00:29:42.877654 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Nov 8 00:29:42.877730 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Nov 8 00:29:42.877808 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Nov 8 00:29:42.877885 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:29:42.878069 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:42.878178 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Nov 8 00:29:42.880322 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:42.880439 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Nov 8 00:29:42.880527 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:42.880606 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Nov 8 00:29:42.880689 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:42.880772 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Nov 8 00:29:42.880854 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:42.880932 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Nov 8 00:29:42.881013 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:42.881088 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Nov 8 00:29:42.881170 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:42.881297 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Nov 8 00:29:42.881410 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:42.881487 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Nov 8 00:29:42.881569 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:29:42.881642 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Nov 8 00:29:42.881709 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:29:42.881769 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:29:42.881844 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:29:42.881906 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Nov 8 00:29:42.881965 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Nov 8 00:29:42.882030 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:29:42.882091 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 00:29:42.882161 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:29:42.882230 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Nov 8 00:29:42.884377 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 8 00:29:42.884449 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Nov 8 00:29:42.884514 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 8 00:29:42.884575 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 8 00:29:42.884636 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 8 00:29:42.884706 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 8 00:29:42.884776 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Nov 8 00:29:42.884840 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 8 00:29:42.884901 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 8 00:29:42.884962 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 00:29:42.885032 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Nov 8 00:29:42.885097 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Nov 8 00:29:42.885165 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Nov 8 00:29:42.885227 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 8 00:29:42.885341 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 8 00:29:42.885421 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 00:29:42.885492 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Nov 8 00:29:42.885557 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 8 00:29:42.885621 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 8 00:29:42.885681 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 8 00:29:42.885748 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 00:29:42.885820 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 8 00:29:42.885885 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Nov 8 00:29:42.885949 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Nov 8 00:29:42.886011 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 8 00:29:42.886072 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 8 00:29:42.886133 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 00:29:42.886208 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Nov 8 00:29:42.888311 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Nov 8 00:29:42.888400 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Nov 8 00:29:42.888464 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 8 00:29:42.888536 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 8 00:29:42.888625 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 00:29:42.888635 kernel: acpiphp: Slot [0] registered
Nov 8 00:29:42.888709 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:29:42.888779 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Nov 8 00:29:42.888841 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Nov 8 00:29:42.888904 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Nov 8 00:29:42.888966 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 8 00:29:42.889029 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 8 00:29:42.889091 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 00:29:42.889099 kernel: acpiphp: Slot [0-2] registered
Nov 8 00:29:42.889160 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 8 00:29:42.889224 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 8 00:29:42.889304 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 00:29:42.889314 kernel: acpiphp: Slot [0-3] registered
Nov 8 00:29:42.889388 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 8 00:29:42.889450 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 8 00:29:42.889510 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 00:29:42.889518 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:29:42.889524 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:29:42.889534 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:29:42.889540 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:29:42.889546 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:29:42.889551 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:29:42.889557 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:29:42.889563 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:29:42.889569 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:29:42.889574 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:29:42.889580 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:29:42.889587 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:29:42.889593 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:29:42.889599 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:29:42.889604 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:29:42.889610 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:29:42.889616 kernel: iommu: Default domain type: Translated
Nov 8 00:29:42.889622 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:29:42.889627 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:29:42.889633 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:29:42.889640 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 8 00:29:42.889646 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Nov 8 00:29:42.889709 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:29:42.889770 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:29:42.889830 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:29:42.889838 kernel: vgaarb: loaded
Nov 8 00:29:42.889844 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:29:42.889850 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:29:42.889856 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:29:42.889864 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:29:42.889870 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:29:42.889876 kernel: pnp: PnP ACPI init
Nov 8 00:29:42.889943 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:29:42.889953 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 00:29:42.889959 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:29:42.889964 kernel: NET: Registered PF_INET protocol family
Nov 8 00:29:42.889970 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:29:42.889978 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 8 00:29:42.889984 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:29:42.889990 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:29:42.889996 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 8 00:29:42.890002 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 8 00:29:42.890007 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:29:42.890013 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:29:42.890023 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:29:42.890036 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:29:42.890152 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 8 00:29:42.890221 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 8 00:29:42.891341 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 8 00:29:42.891427 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Nov 8 00:29:42.891488 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Nov 8 00:29:42.891548 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Nov 8 00:29:42.891609 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 8 00:29:42.891675 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 8 00:29:42.891735 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 8 00:29:42.891795 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 8 00:29:42.891856 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 8 00:29:42.891917 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 00:29:42.891977 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 8 00:29:42.892127 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 8 00:29:42.892195 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 00:29:42.893316 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 8 00:29:42.893409 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 8 00:29:42.893522 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 00:29:42.893587 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 8 00:29:42.893649 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 8 00:29:42.893728 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 00:29:42.893798 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 8 00:29:42.893872 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 8 00:29:42.893937 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 00:29:42.894035 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 8 00:29:42.894098 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Nov 8 00:29:42.894163 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 8 00:29:42.894224 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 00:29:42.896335 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 8 00:29:42.896419 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Nov 8 00:29:42.896481 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 8 00:29:42.896543 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 00:29:42.896610 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 8 00:29:42.896695 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Nov 8 00:29:42.896775 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 8 00:29:42.896843 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 00:29:42.896904 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:29:42.896959 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:29:42.897012 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:29:42.897065 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Nov 8 00:29:42.897119 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:29:42.897172 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 8 00:29:42.897239 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 8 00:29:42.897335 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 8 00:29:42.897411 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 8 00:29:42.897469 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 00:29:42.897531 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 8 00:29:42.897587 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 00:29:42.897654 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 8 00:29:42.897710 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 00:29:42.897772 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 8 00:29:42.897830 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 00:29:42.897896 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 8 00:29:42.897953 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 00:29:42.898018 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Nov 8 00:29:42.898074 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 8 00:29:42.898129 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 00:29:42.898190 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Nov 8 00:29:42.900310 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 8 00:29:42.900609 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 00:29:42.900711 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Nov 8 00:29:42.900807 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 8 00:29:42.900869 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 00:29:42.900879 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:29:42.900886 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:29:42.900892 kernel: Initialise system trusted keyrings
Nov 8 00:29:42.900899 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 8 00:29:42.900905 kernel: Key type asymmetric registered
Nov 8 00:29:42.900911 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:29:42.900920 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:29:42.900926 kernel: io scheduler mq-deadline registered
Nov 8 00:29:42.900932 kernel: io scheduler kyber registered
Nov 8 00:29:42.900938 kernel: io scheduler bfq registered
Nov 8 00:29:42.901006 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 8 00:29:42.901071 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 8 00:29:42.901135 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 8 00:29:42.901198 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 8 00:29:42.901546 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 8 00:29:42.901622 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 8 00:29:42.901689 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 8 00:29:42.901753 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 8 00:29:42.901816 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 8 00:29:42.901878 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 8 00:29:42.901942 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 8 00:29:42.902004 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 8 00:29:42.902067 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 8 00:29:42.902135 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 8 00:29:42.902198 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 8 00:29:42.902325 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 8 00:29:42.902336 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:29:42.902434 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 8 00:29:42.902499 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 8 00:29:42.902508 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:29:42.902515 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 8 00:29:42.902521 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:29:42.902531 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:29:42.902537 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:29:42.902544 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:29:42.902549 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:29:42.902556 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:29:42.902622 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 8 00:29:42.902681 kernel: rtc_cmos 00:03: registered as rtc0
Nov 8 00:29:42.902738 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:29:42 UTC (1762561782)
Nov 8 00:29:42.902798 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:29:42.902807 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:29:42.902813 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:29:42.902819 kernel: Segment Routing with IPv6
Nov 8 00:29:42.902825 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:29:42.902831 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:29:42.902837 kernel: Key type dns_resolver registered
Nov 8 00:29:42.902843 kernel: IPI shorthand broadcast: enabled
Nov 8 00:29:42.902851 kernel: sched_clock: Marking stable (1131015844, 135832262)->(1275097840, -8249734)
Nov 8 00:29:42.902857 kernel: registered taskstats version 1
Nov 8 00:29:42.902864 kernel: Loading compiled-in X.509 certificates
Nov 8 00:29:42.902870 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:29:42.902876 kernel: Key type .fscrypt registered
Nov 8 00:29:42.902881 kernel: Key type fscrypt-provisioning registered
Nov 8 00:29:42.902887 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:29:42.902893 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:29:42.902900 kernel: ima: No architecture policies found
Nov 8 00:29:42.902907 kernel: clk: Disabling unused clocks
Nov 8 00:29:42.902913 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:29:42.902919 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:29:42.902925 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:29:42.902931 kernel: Run /init as init process
Nov 8 00:29:42.902937 kernel: with arguments:
Nov 8 00:29:42.902943 kernel: /init
Nov 8 00:29:42.902949 kernel: with environment:
Nov 8 00:29:42.902955 kernel: HOME=/
Nov 8 00:29:42.902961 kernel: TERM=linux
Nov 8 00:29:42.902970 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:29:42.902978 systemd[1]: Detected virtualization kvm.
Nov 8 00:29:42.902985 systemd[1]: Detected architecture x86-64.
Nov 8 00:29:42.902991 systemd[1]: Running in initrd.
Nov 8 00:29:42.902997 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:29:42.903003 systemd[1]: Hostname set to .
Nov 8 00:29:42.903010 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:29:42.903017 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:29:42.903024 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:29:42.903054 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:29:42.903061 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:29:42.903067 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:29:42.903074 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:29:42.903080 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:29:42.903090 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:29:42.903097 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:29:42.903105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:29:42.903111 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:29:42.903118 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:29:42.903124 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:29:42.903130 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:29:42.903137 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:29:42.903146 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:29:42.903153 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:29:42.903194 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:29:42.903209 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:29:42.903221 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:29:42.903231 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:29:42.903237 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:29:42.903244 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:29:42.903276 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:29:42.903282 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:29:42.903289 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:29:42.903295 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:29:42.903301 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:29:42.903308 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:29:42.903314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:42.903321 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:29:42.903327 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:29:42.903367 systemd-journald[187]: Collecting audit messages is disabled.
Nov 8 00:29:42.903385 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:29:42.903395 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:29:42.903402 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:29:42.903409 systemd-journald[187]: Journal started
Nov 8 00:29:42.903425 systemd-journald[187]: Runtime Journal (/run/log/journal/83dfb6b875ec4507973ae1abac696be1) is 4.8M, max 38.4M, 33.6M free.
Nov 8 00:29:42.878204 systemd-modules-load[188]: Inserted module 'overlay'
Nov 8 00:29:42.934416 kernel: Bridge firewalling registered
Nov 8 00:29:42.934433 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:29:42.903925 systemd-modules-load[188]: Inserted module 'br_netfilter'
Nov 8 00:29:42.935060 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:29:42.935872 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:42.936866 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:29:42.942364 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:29:42.944365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:29:42.945213 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:29:42.949144 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:29:42.960977 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:29:42.964404 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:29:42.969659 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:29:42.971214 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:29:42.972479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:29:42.975913 dracut-cmdline[216]: dracut-dracut-053
Nov 8 00:29:42.979777 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:29:42.981375 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:29:43.000182 systemd-resolved[228]: Positive Trust Anchors:
Nov 8 00:29:43.000735 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:29:43.000762 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:29:43.008642 systemd-resolved[228]: Defaulting to hostname 'linux'.
Nov 8 00:29:43.009339 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:29:43.010036 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:29:43.023274 kernel: SCSI subsystem initialized
Nov 8 00:29:43.031282 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:29:43.040279 kernel: iscsi: registered transport (tcp)
Nov 8 00:29:43.056296 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:29:43.056368 kernel: QLogic iSCSI HBA Driver
Nov 8 00:29:43.081315 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:29:43.086403 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:29:43.104550 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:29:43.104611 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:29:43.105272 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:29:43.141284 kernel: raid6: avx2x4 gen() 35475 MB/s
Nov 8 00:29:43.158282 kernel: raid6: avx2x2 gen() 32201 MB/s
Nov 8 00:29:43.175417 kernel: raid6: avx2x1 gen() 26764 MB/s
Nov 8 00:29:43.175474 kernel: raid6: using algorithm avx2x4 gen() 35475 MB/s
Nov 8 00:29:43.193544 kernel: raid6: .... xor() 4606 MB/s, rmw enabled
Nov 8 00:29:43.193598 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:29:43.211288 kernel: xor: automatically using best checksumming function avx
Nov 8 00:29:43.327286 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:29:43.334964 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:29:43.343420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:29:43.355126 systemd-udevd[404]: Using default interface naming scheme 'v255'.
Nov 8 00:29:43.359089 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:29:43.366418 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:29:43.374614 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Nov 8 00:29:43.393329 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:29:43.399391 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:29:43.433403 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:29:43.441404 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:29:43.449817 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:29:43.450955 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:29:43.452083 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:29:43.453147 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:29:43.459394 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:29:43.469716 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:29:43.505293 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:29:43.507276 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:29:43.517312 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 8 00:29:43.553410 kernel: ACPI: bus type USB registered
Nov 8 00:29:43.553473 kernel: usbcore: registered new interface driver usbfs
Nov 8 00:29:43.555415 kernel: usbcore: registered new interface driver hub
Nov 8 00:29:43.555450 kernel: usbcore: registered new device driver usb
Nov 8 00:29:43.562378 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:29:43.562409 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:29:43.560514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:29:43.560611 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:29:43.561206 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:29:43.561701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:29:43.561791 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:43.562890 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:43.571541 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:43.601922 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:29:43.602096 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Nov 8 00:29:43.602222 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Nov 8 00:29:43.602372 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:29:43.602465 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Nov 8 00:29:43.602547 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Nov 8 00:29:43.604331 kernel: hub 1-0:1.0: USB hub found
Nov 8 00:29:43.610172 kernel: hub 1-0:1.0: 4 ports detected
Nov 8 00:29:43.610290 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Nov 8 00:29:43.610446 kernel: hub 2-0:1.0: USB hub found
Nov 8 00:29:43.610545 kernel: hub 2-0:1.0: 4 ports detected
Nov 8 00:29:43.621267 kernel: libata version 3.00 loaded.
Nov 8 00:29:43.626512 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 8 00:29:43.626666 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Nov 8 00:29:43.626789 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:29:43.626875 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 8 00:29:43.626955 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:29:43.630281 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:29:43.630306 kernel: GPT:17805311 != 80003071
Nov 8 00:29:43.630315 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:29:43.630323 kernel: GPT:17805311 != 80003071
Nov 8 00:29:43.630330 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:29:43.630337 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:43.630345 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:29:43.633626 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:29:43.633746 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:29:43.633762 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:29:43.633846 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:29:43.638265 kernel: scsi host1: ahci
Nov 8 00:29:43.639275 kernel: scsi host2: ahci
Nov 8 00:29:43.639400 kernel: scsi host3: ahci
Nov 8 00:29:43.640261 kernel: scsi host4: ahci
Nov 8 00:29:43.640443 kernel: scsi host5: ahci
Nov 8 00:29:43.641264 kernel: scsi host6: ahci
Nov 8 00:29:43.641392 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51
Nov 8 00:29:43.641402 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51
Nov 8 00:29:43.641410 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51
Nov 8 00:29:43.641417 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51
Nov 8 00:29:43.641424 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51
Nov 8 00:29:43.641431 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51
Nov 8 00:29:43.667280 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (452)
Nov 8 00:29:43.670280 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (468)
Nov 8 00:29:43.674491 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 8 00:29:43.718477 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 8 00:29:43.719177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:43.723473 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 8 00:29:43.723993 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 8 00:29:43.729006 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:29:43.740375 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:29:43.746413 disk-uuid[564]: Primary Header is updated.
Nov 8 00:29:43.746413 disk-uuid[564]: Secondary Entries is updated.
Nov 8 00:29:43.746413 disk-uuid[564]: Secondary Header is updated.
Nov 8 00:29:43.746419 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:29:43.750546 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:43.754283 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:43.760285 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:43.760445 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:29:43.844406 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Nov 8 00:29:43.957277 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:43.957390 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:43.957411 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:43.964178 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 8 00:29:43.964235 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:43.964277 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 8 00:29:43.968381 kernel: ata1.00: applying bridge limits
Nov 8 00:29:43.971393 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:29:43.972562 kernel: ata1.00: configured for UDMA/100
Nov 8 00:29:43.979301 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:29:43.993304 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:29:44.003277 kernel: usbcore: registered new interface driver usbhid
Nov 8 00:29:44.003316 kernel: usbhid: USB HID core driver
Nov 8 00:29:44.010280 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Nov 8 00:29:44.014277 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Nov 8 00:29:44.040569 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 8 00:29:44.040972 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:29:44.053473 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:29:44.767272 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:29:44.768702 disk-uuid[565]: The operation has completed successfully.
Nov 8 00:29:44.829444 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:29:44.829550 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:29:44.836603 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:29:44.839752 sh[597]: Success
Nov 8 00:29:44.859299 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:29:44.918338 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:29:44.935442 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:29:44.937928 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:29:44.963317 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:29:44.963407 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:29:44.966741 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:29:44.966792 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:29:44.968286 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:29:44.980291 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:29:44.981970 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:29:44.983897 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:29:44.993557 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:29:44.997589 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:29:45.016908 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:29:45.016962 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:29:45.019847 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:29:45.027458 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:29:45.027521 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:29:45.038513 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:29:45.041317 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:29:45.045187 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:29:45.052670 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:29:45.124298 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:29:45.132423 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:29:45.139080 ignition[715]: Ignition 2.19.0
Nov 8 00:29:45.139092 ignition[715]: Stage: fetch-offline
Nov 8 00:29:45.139132 ignition[715]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:29:45.139143 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:29:45.140038 ignition[715]: parsed url from cmdline: ""
Nov 8 00:29:45.140043 ignition[715]: no config URL provided
Nov 8 00:29:45.140051 ignition[715]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:29:45.140063 ignition[715]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:29:45.140074 ignition[715]: failed to fetch config: resource requires networking
Nov 8 00:29:45.146846 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:29:45.142137 ignition[715]: Ignition finished successfully
Nov 8 00:29:45.148725 systemd-networkd[782]: lo: Link UP
Nov 8 00:29:45.148728 systemd-networkd[782]: lo: Gained carrier
Nov 8 00:29:45.150288 systemd-networkd[782]: Enumeration completed
Nov 8 00:29:45.150404 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:29:45.150792 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:45.150794 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:29:45.151446 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:45.151449 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:29:45.152140 systemd-networkd[782]: eth0: Link UP
Nov 8 00:29:45.152143 systemd-networkd[782]: eth0: Gained carrier
Nov 8 00:29:45.152386 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:45.152810 systemd[1]: Reached target network.target - Network.
Nov 8 00:29:45.155443 systemd-networkd[782]: eth1: Link UP
Nov 8 00:29:45.155446 systemd-networkd[782]: eth1: Gained carrier
Nov 8 00:29:45.155452 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:45.162408 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:29:45.175703 ignition[786]: Ignition 2.19.0
Nov 8 00:29:45.175715 ignition[786]: Stage: fetch
Nov 8 00:29:45.175942 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:29:45.175964 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:29:45.176073 ignition[786]: parsed url from cmdline: ""
Nov 8 00:29:45.176077 ignition[786]: no config URL provided
Nov 8 00:29:45.176083 ignition[786]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:29:45.176092 ignition[786]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:29:45.176114 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Nov 8 00:29:45.176300 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:29:45.204316 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 8 00:29:45.218324 systemd-networkd[782]: eth0: DHCPv4 address 157.180.31.220/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 8 00:29:45.376489 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Nov 8 00:29:45.385722 ignition[786]: GET result: OK
Nov 8 00:29:45.385794 ignition[786]: parsing config with SHA512: ccb9771a820e5f88a4f5477cc03ce4fece9f76d3191bd9c2e30706e95aee7e4ca4b8dd158c004bee2e12cf1198fd45e6bb4341a55f365f1c400e22c9a84c1c41
Nov 8 00:29:45.389526 unknown[786]: fetched base config from "system"
Nov 8 00:29:45.389543 unknown[786]: fetched base config from "system"
Nov 8 00:29:45.389927 ignition[786]: fetch: fetch complete
Nov 8 00:29:45.389553 unknown[786]: fetched user config from "hetzner"
Nov 8 00:29:45.389933 ignition[786]: fetch: fetch passed
Nov 8 00:29:45.389986 ignition[786]: Ignition finished successfully
Nov 8 00:29:45.394004 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:29:45.401475 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:29:45.424424 ignition[793]: Ignition 2.19.0
Nov 8 00:29:45.424453 ignition[793]: Stage: kargs
Nov 8 00:29:45.424826 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:29:45.424850 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:29:45.428057 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:29:45.426389 ignition[793]: kargs: kargs passed
Nov 8 00:29:45.426444 ignition[793]: Ignition finished successfully
Nov 8 00:29:45.435423 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:29:45.450438 ignition[799]: Ignition 2.19.0
Nov 8 00:29:45.450451 ignition[799]: Stage: disks
Nov 8 00:29:45.454783 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:29:45.450664 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:29:45.459133 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:29:45.450678 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:29:45.461572 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:29:45.453855 ignition[799]: disks: disks passed
Nov 8 00:29:45.463137 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:29:45.453898 ignition[799]: Ignition finished successfully
Nov 8 00:29:45.464677 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:29:45.466276 systemd[1]: Reached target basic.target - Basic System.
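The fetch stage above shows attempt #1 failing before DHCP completes and attempt #2 succeeding once addresses are assigned, after which Ignition logs the SHA512 of the config it parsed. A rough sketch of that retry-then-hash flow, using only the endpoint shown in the log (the retry count, delay, and helper name are assumptions; Ignition's real backoff and error handling differ):

```python
import hashlib
import time
import urllib.error
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

def fetch_userdata(retries=5, delay=2.0):
    """Fetch instance userdata, retrying until networking is up."""
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                data = resp.read()
            # Ignition logs the SHA512 of the config it parsed; mirror that here.
            print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
            return data
        except (urllib.error.URLError, OSError) as err:
            # e.g. "connect: network is unreachable" before DHCP finishes
            print(f"GET error on attempt #{attempt}: {err}")
            time.sleep(delay)
    raise RuntimeError("failed to fetch config")
```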
Nov 8 00:29:45.482457 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:29:45.499474 systemd-fsck[808]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 8 00:29:45.502577 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:29:45.509389 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:29:45.585284 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:29:45.585952 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:29:45.586809 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:29:45.606458 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:29:45.609451 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:29:45.613492 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:29:45.616678 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:29:45.616725 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:29:45.621312 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:29:45.650467 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (816)
Nov 8 00:29:45.650493 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:29:45.650502 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:29:45.650510 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:29:45.650518 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:29:45.650525 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:29:45.648386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:29:45.662662 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:29:45.696888 coreos-metadata[818]: Nov 08 00:29:45.696 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Nov 8 00:29:45.698299 coreos-metadata[818]: Nov 08 00:29:45.697 INFO Fetch successful
Nov 8 00:29:45.700390 coreos-metadata[818]: Nov 08 00:29:45.700 INFO wrote hostname ci-4081-3-6-n-6ee8ddef06 to /sysroot/etc/hostname
Nov 8 00:29:45.703022 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:29:45.719406 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:29:45.723528 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:29:45.727227 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:29:45.731812 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:29:45.797960 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:29:45.803351 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:29:45.806142 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
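The coreos-metadata lines above fetch the hostname from the same link-local metadata service and write it into the new root before the pivot. A hedged sketch of the equivalent two steps (the function and its error handling are illustrative; the real agent is Afterburn/coreos-metadata):

```python
import pathlib
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # URL from the log

def write_hostname(sysroot="/sysroot"):
    """Fetch the instance hostname and persist it into the target root."""
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    pathlib.Path(sysroot, "etc/hostname").write_text(hostname + "\n")
    print(f"wrote hostname {hostname} to {sysroot}/etc/hostname")
```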
Nov 8 00:29:45.812272 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:29:45.830128 ignition[933]: INFO : Ignition 2.19.0
Nov 8 00:29:45.832011 ignition[933]: INFO : Stage: mount
Nov 8 00:29:45.832011 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:29:45.832011 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:29:45.834407 ignition[933]: INFO : mount: mount passed
Nov 8 00:29:45.834407 ignition[933]: INFO : Ignition finished successfully
Nov 8 00:29:45.832696 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:29:45.833932 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:29:45.841355 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:29:45.962115 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:29:45.968629 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:29:46.001339 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (945)
Nov 8 00:29:46.007104 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:29:46.007173 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:29:46.011601 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:29:46.020910 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:29:46.020974 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:29:46.025199 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:29:46.056752 ignition[962]: INFO : Ignition 2.19.0
Nov 8 00:29:46.056752 ignition[962]: INFO : Stage: files
Nov 8 00:29:46.059430 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:29:46.059430 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:29:46.059430 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:29:46.065003 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:29:46.065003 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:29:46.068479 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:29:46.070391 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:29:46.072232 unknown[962]: wrote ssh authorized keys file for user: core
Nov 8 00:29:46.073879 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:29:46.078227 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:29:46.078227 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 8 00:29:46.258612 systemd-networkd[782]: eth1: Gained IPv6LL
Nov 8 00:29:46.259086 systemd-networkd[782]: eth0: Gained IPv6LL
Nov 8 00:29:46.309649 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:29:46.620117 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:29:46.620117 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:29:46.625366 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 8 00:29:46.959018 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:29:47.219534 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:29:47.219534 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:29:47.222550 ignition[962]: INFO : files: files passed
Nov 8 00:29:47.222550 ignition[962]: INFO : Ignition finished successfully
Nov 8 00:29:47.222715 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:29:47.231667 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:29:47.234632 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:29:47.235551 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:29:47.235630 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:29:47.243115 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:29:47.243115 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:29:47.245295 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:29:47.247310 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:29:47.248243 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:29:47.254410 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:29:47.270785 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:29:47.270874 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:29:47.272159 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:29:47.273242 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:29:47.274583 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:29:47.280432 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:29:47.290520 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:29:47.297343 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:29:47.304761 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:29:47.305441 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:29:47.306083 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:29:47.306745 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:29:47.306839 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:29:47.308330 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:29:47.309046 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:29:47.309969 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:29:47.310966 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:29:47.312093 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:29:47.313107 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:29:47.314067 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:29:47.315225 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:29:47.316333 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:29:47.317413 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:29:47.318399 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:29:47.318496 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:29:47.319982 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:29:47.320781 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:29:47.321908 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:29:47.322277 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:29:47.323435 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:29:47.323530 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:29:47.324986 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:29:47.325090 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:29:47.325855 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:29:47.325988 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:29:47.326995 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 8 00:29:47.327122 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:29:47.336744 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:29:47.339443 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:29:47.339873 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:29:47.339998 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:29:47.341693 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:29:47.342501 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:29:47.350167 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:29:47.350237 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:29:47.354760 ignition[1014]: INFO : Ignition 2.19.0
Nov 8 00:29:47.354760 ignition[1014]: INFO : Stage: umount
Nov 8 00:29:47.354760 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:29:47.354760 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:29:47.354760 ignition[1014]: INFO : umount: umount passed
Nov 8 00:29:47.354760 ignition[1014]: INFO : Ignition finished successfully
Nov 8 00:29:47.356206 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:29:47.356306 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:29:47.356835 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:29:47.356869 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:29:47.358611 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:29:47.358644 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:29:47.359617 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:29:47.359652 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:29:47.365761 systemd[1]: Stopped target network.target - Network.
Nov 8 00:29:47.367848 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:29:47.367894 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:29:47.368869 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:29:47.369783 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:29:47.371466 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:29:47.372162 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:29:47.373108 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:29:47.374037 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:29:47.374066 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:29:47.375008 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:29:47.375034 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:29:47.376034 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:29:47.376068 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:29:47.376920 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:29:47.376954 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:29:47.378140 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:29:47.379322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:29:47.381556 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:29:47.381949 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:29:47.382017 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:29:47.383202 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:29:47.383322 systemd-networkd[782]: eth1: DHCPv6 lease lost
Nov 8 00:29:47.383607 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:29:47.387310 systemd-networkd[782]: eth0: DHCPv6 lease lost
Nov 8 00:29:47.388153 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:29:47.388280 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:29:47.389562 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:29:47.390332 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:29:47.391994 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:29:47.392033 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:29:47.398383 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:29:47.399009 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:29:47.399049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:29:47.399556 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:29:47.399588 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:29:47.400052 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:29:47.400081 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:29:47.401156 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:29:47.401187 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:29:47.402552 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:29:47.409882 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:29:47.409955 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:29:47.415518 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:29:47.415625 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:29:47.416832 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:29:47.416879 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:29:47.417729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:29:47.417761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:29:47.418692 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:29:47.418726 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:29:47.420138 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:29:47.420169 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:29:47.421129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:29:47.421160 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:29:47.431391 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:29:47.431880 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:29:47.431919 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:29:47.435300 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 8 00:29:47.435342 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:29:47.435921 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:29:47.435956 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:29:47.437129 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:29:47.437162 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:47.438850 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:29:47.438913 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:29:47.439744 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:29:47.445362 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:29:47.451429 systemd[1]: Switching root.
Nov 8 00:29:47.480281 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:29:47.480334 systemd-journald[187]: Journal stopped
Nov 8 00:29:48.281615 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:29:48.281662 kernel: SELinux: policy capability open_perms=1
Nov 8 00:29:48.281675 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:29:48.281693 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:29:48.281703 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:29:48.281710 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:29:48.281717 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:29:48.281724 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:29:48.281731 kernel: audit: type=1403 audit(1762561787.626:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:29:48.281740 systemd[1]: Successfully loaded SELinux policy in 40.635ms.
Nov 8 00:29:48.281756 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.299ms.
Nov 8 00:29:48.281764 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:29:48.281774 systemd[1]: Detected virtualization kvm.
Nov 8 00:29:48.281784 systemd[1]: Detected architecture x86-64.
Nov 8 00:29:48.281792 systemd[1]: Detected first boot.
Nov 8 00:29:48.281800 systemd[1]: Hostname set to <ci-4081-3-6-n-6ee8ddef06>.
Nov 8 00:29:48.281807 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:29:48.281815 zram_generator::config[1056]: No configuration found.
Nov 8 00:29:48.281826 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:29:48.281834 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 00:29:48.281843 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 00:29:48.281851 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:29:48.281859 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:29:48.281867 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:29:48.281876 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:29:48.281885 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:29:48.281893 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:29:48.281901 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:29:48.281911 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:29:48.281918 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:29:48.281926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:29:48.281934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:29:48.281942 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:29:48.281952 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:29:48.281961 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
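"Initializing machine ID from VM UUID" above refers to systemd deriving /etc/machine-id from the hypervisor-provided DMI UUID on first boot under KVM. A hedged sketch of that derivation, assuming the UUID is read from /sys/class/dmi/id/product_uuid and normalized to lowercase hex with dashes stripped (this matches documented systemd behaviour, but the exact normalization here is an assumption, not systemd's code):

```python
import pathlib
import re

def machine_id_from_vm_uuid():
    """Derive a machine-id-style string from the VM's DMI product UUID."""
    raw = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    mid = raw.replace("-", "").lower()     # 32 lowercase hex chars, no dashes
    assert re.fullmatch(r"[0-9a-f]{32}", mid), "unexpected UUID format"
    return mid
```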
Nov 8 00:29:48.281968 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:29:48.281976 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:29:48.281988 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:29:48.281996 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 00:29:48.282004 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 00:29:48.282012 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:29:48.282020 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:29:48.282028 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:29:48.282037 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:29:48.282045 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:29:48.282053 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:29:48.282061 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:29:48.282069 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:29:48.282078 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:29:48.282086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:29:48.282094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:29:48.282101 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:29:48.282112 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:29:48.282121 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:29:48.282129 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:29:48.282137 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:48.282144 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:29:48.282153 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:29:48.282163 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:29:48.282173 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:29:48.282181 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:29:48.282189 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:29:48.282197 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:29:48.282205 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:29:48.282213 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:29:48.282221 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:29:48.282231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:29:48.282240 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:29:48.282262 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:29:48.282272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:29:48.282280 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:29:48.282288 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 00:29:48.282295 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 00:29:48.282303 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 00:29:48.282311 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 00:29:48.282321 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:29:48.282329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:29:48.282338 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:29:48.282346 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:29:48.282354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:29:48.282362 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 00:29:48.282370 systemd[1]: Stopped verity-setup.service.
Nov 8 00:29:48.282388 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:48.282397 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:29:48.282406 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:29:48.282414 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:29:48.282422 kernel: loop: module loaded
Nov 8 00:29:48.282432 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:29:48.282440 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:29:48.282462 systemd-journald[1143]: Collecting audit messages is disabled.
Nov 8 00:29:48.282480 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:29:48.282491 systemd-journald[1143]: Journal started
Nov 8 00:29:48.282507 systemd-journald[1143]: Runtime Journal (/run/log/journal/83dfb6b875ec4507973ae1abac696be1) is 4.8M, max 38.4M, 33.6M free.
Nov 8 00:29:48.037007 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:29:48.054363 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 8 00:29:48.054901 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 00:29:48.284444 kernel: fuse: init (API version 7.39)
Nov 8 00:29:48.293270 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:29:48.289237 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:29:48.290962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:29:48.291713 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:29:48.291808 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:29:48.292523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:29:48.292611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:29:48.293213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:29:48.293569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:29:48.294240 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:29:48.294488 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:29:48.295682 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:29:48.295776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:29:48.296537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:29:48.297472 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:29:48.299003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:29:48.311088 kernel: ACPI: bus type drm_connector registered
Nov 8 00:29:48.308781 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:29:48.308885 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:29:48.309812 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:29:48.315060 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:29:48.318288 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:29:48.318790 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:29:48.318813 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:29:48.320040 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:29:48.329637 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:29:48.333325 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:29:48.333845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:29:48.335239 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:29:48.336373 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:29:48.336866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:29:48.340356 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:29:48.340889 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:29:48.341697 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:29:48.344363 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:29:48.347037 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:29:48.348525 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:29:48.350339 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:29:48.350973 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:29:48.368475 systemd-journald[1143]: Time spent on flushing to /var/log/journal/83dfb6b875ec4507973ae1abac696be1 is 51.251ms for 1132 entries.
Nov 8 00:29:48.368475 systemd-journald[1143]: System Journal (/var/log/journal/83dfb6b875ec4507973ae1abac696be1) is 8.0M, max 584.8M, 576.8M free.
Nov 8 00:29:48.439554 systemd-journald[1143]: Received client request to flush runtime journal.
Nov 8 00:29:48.439584 kernel: loop0: detected capacity change from 0 to 140768
Nov 8 00:29:48.439602 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:29:48.439612 kernel: loop1: detected capacity change from 0 to 219144
Nov 8 00:29:48.374661 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:29:48.375272 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:29:48.382645 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:29:48.409740 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:29:48.414925 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Nov 8 00:29:48.414936 systemd-tmpfiles[1177]: ACLs are not supported, ignoring.
Nov 8 00:29:48.415487 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:29:48.416594 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:29:48.424268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:29:48.438357 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:29:48.442405 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:29:48.444342 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 8 00:29:48.453504 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:29:48.454015 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:29:48.484727 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:29:48.491667 kernel: loop2: detected capacity change from 0 to 142488
Nov 8 00:29:48.490397 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:29:48.513566 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Nov 8 00:29:48.513581 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Nov 8 00:29:48.519397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:29:48.530275 kernel: loop3: detected capacity change from 0 to 8
Nov 8 00:29:48.559281 kernel: loop4: detected capacity change from 0 to 140768
Nov 8 00:29:48.578280 kernel: loop5: detected capacity change from 0 to 219144
Nov 8 00:29:48.598287 kernel: loop6: detected capacity change from 0 to 142488
Nov 8 00:29:48.615322 kernel: loop7: detected capacity change from 0 to 8
Nov 8 00:29:48.618826 (sd-merge)[1204]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Nov 8 00:29:48.619667 (sd-merge)[1204]: Merged extensions into '/usr'.
Nov 8 00:29:48.626434 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:29:48.626446 systemd[1]: Reloading...
Nov 8 00:29:48.673304 zram_generator::config[1227]: No configuration found.
Nov 8 00:29:48.821274 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
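The (sd-merge) lines above are systemd-sysext stacking the listed extension images over /usr (each loop device corresponds to one mounted .raw image), after which PID 1 reloads so units shipped by the extensions become visible. A toy sketch of the discovery step only; the search directories follow the systemd-sysext documentation, and real merging also loop-mounts the .raw images and validates their extension-release metadata, both elided here:

```python
import pathlib

def plan_sysext_merge():
    """List extension image candidates and print the planned overlay stack."""
    search = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]
    images = []
    for d in search:
        p = pathlib.Path(d)
        if p.is_dir():
            images += sorted(p.iterdir())
    # Overlay lowerdir is colon-separated; host /usr sits at the bottom.
    layers = [str(i) + "/usr" for i in images] + ["/usr"]
    print("would mount a read-only overlay on /usr with lowerdir=" + ":".join(layers))
```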
Nov 8 00:29:48.820990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:29:48.859904 systemd[1]: Reloading finished in 233 ms.
Nov 8 00:29:48.881401 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:29:48.882941 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:29:48.893434 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:29:48.894981 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:29:48.900627 systemd[1]: Reloading requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:29:48.900636 systemd[1]: Reloading...
Nov 8 00:29:48.924195 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:29:48.925007 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:29:48.926124 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:29:48.926582 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Nov 8 00:29:48.927447 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Nov 8 00:29:48.933673 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:29:48.933681 systemd-tmpfiles[1274]: Skipping /boot
Nov 8 00:29:48.944069 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:29:48.944144 systemd-tmpfiles[1274]: Skipping /boot
Nov 8 00:29:48.957399 zram_generator::config[1300]: No configuration found.
Nov 8 00:29:49.046339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:29:49.083846 systemd[1]: Reloading finished in 182 ms.
Nov 8 00:29:49.098566 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:29:49.102583 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:29:49.110433 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:29:49.113421 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:29:49.115365 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:29:49.119237 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:29:49.121608 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:29:49.124422 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:29:49.131241 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:29:49.134416 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:49.134541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:29:49.141785 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:29:49.143340 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:29:49.148850 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:29:49.150937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:29:49.151054 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:49.155139 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:49.155558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:29:49.155712 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:29:49.155796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:49.170826 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:29:49.172093 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Nov 8 00:29:49.172188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:29:49.172331 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:29:49.178945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:29:49.179050 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:29:49.181363 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:29:49.182674 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:29:49.186896 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:49.187437 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:29:49.189072 augenrules[1375]: No rules
Nov 8 00:29:49.193366 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:29:49.195958 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:29:49.197316 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:29:49.197359 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:29:49.204447 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 00:29:49.208350 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:29:49.209087 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:49.209487 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:29:49.210187 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:29:49.211523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:29:49.212177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:29:49.212290 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:29:49.212906 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:29:49.212994 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:29:49.214795 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:29:49.222042 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:29:49.230631 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:29:49.232219 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:29:49.248007 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:29:49.249530 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:29:49.255288 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:29:49.298358 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 8 00:29:49.347840 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Nov 8 00:29:49.348338 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:49.348550 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:29:49.356410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:29:49.359214 systemd-networkd[1398]: lo: Link UP
Nov 8 00:29:49.365418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:29:49.368779 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:29:49.370141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:29:49.370171 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:29:49.370184 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:29:49.370890 systemd-networkd[1398]: lo: Gained carrier
Nov 8 00:29:49.373768 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 8 00:29:49.371809 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:29:49.371932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:29:49.378986 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:29:49.378959 systemd-networkd[1398]: Enumeration completed
Nov 8 00:29:49.379755 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:29:49.383324 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:29:49.385062 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 00:29:49.385745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:29:49.386135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:29:49.391219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:29:49.391898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:29:49.395668 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:49.395674 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:29:49.398705 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:29:49.400570 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:29:49.400600 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:29:49.401350 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:49.401356 systemd-networkd[1398]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:29:49.402021 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:49.402093 systemd-networkd[1398]: eth0: Link UP
Nov 8 00:29:49.402132 systemd-networkd[1398]: eth0: Gained carrier
Nov 8 00:29:49.402189 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:49.404945 systemd-resolved[1352]: Positive Trust Anchors:
Nov 8 00:29:49.404956 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:29:49.404982 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:29:49.406226 systemd-networkd[1398]: eth1: Link UP
Nov 8 00:29:49.406231 systemd-networkd[1398]: eth1: Gained carrier
Nov 8 00:29:49.406241 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:49.414648 systemd-resolved[1352]: Using system hostname 'ci-4081-3-6-n-6ee8ddef06'.
Nov 8 00:29:49.416862 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:29:49.418777 systemd[1]: Reached target network.target - Network.
Nov 8 00:29:49.420537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:29:49.433265 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 8 00:29:49.436160 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 8 00:29:49.436352 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 8 00:29:49.436492 systemd-networkd[1398]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 8 00:29:49.437802 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Nov 8 00:29:49.439909 systemd-networkd[1398]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:29:49.449304 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 8 00:29:49.451194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:49.452518 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:29:49.461408 systemd-networkd[1398]: eth0: DHCPv4 address 157.180.31.220/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 8 00:29:49.461667 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Nov 8 00:29:49.462535 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Nov 8 00:29:49.472514 kernel: EDAC MC: Ver: 3.0.0
Nov 8 00:29:49.498297 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1405)
Nov 8 00:29:49.521626 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:29:49.528269 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 8 00:29:49.528460 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:29:49.530289 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 8 00:29:49.568996 kernel: Console: switching to colour dummy device 80x25
Nov 8 00:29:49.569076 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 8 00:29:49.569090 kernel: [drm] features: -context_init
Nov 8 00:29:49.572301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:49.575378 kernel: [drm] number of scanouts: 1
Nov 8 00:29:49.575427 kernel: [drm] number of cap sets: 0
Nov 8 00:29:49.575474 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:29:49.576915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:29:49.579511 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Nov 8 00:29:49.577085 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:49.577531 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:49.586971 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 8 00:29:49.587032 kernel: Console: switching to colour frame buffer device 160x50
Nov 8 00:29:49.589066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:49.595271 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 8 00:29:49.603147 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:29:49.603330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:49.610371 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:29:49.632415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:29:49.643109 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:29:49.647446 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:29:49.658568 lvm[1462]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:29:49.679905 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:29:49.680620 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:29:49.680726 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:29:49.680908 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:29:49.681001 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:29:49.681226 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:29:49.682483 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:29:49.682572 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:29:49.682629 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:29:49.682650 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:29:49.682691 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:29:49.687789 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:29:49.689663 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:29:49.694778 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:29:49.695920 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:29:49.696478 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:29:49.696611 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:29:49.696677 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:29:49.697181 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:29:49.697213 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:29:49.699359 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:29:49.706377 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:29:49.706419 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 8 00:29:49.710450 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:29:49.717860 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:29:49.720455 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:29:49.720939 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:29:49.724433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:29:49.728937 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:29:49.732379 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Nov 8 00:29:49.738372 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:29:49.742435 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:29:49.754097 dbus-daemon[1470]: [system] SELinux support is enabled
Nov 8 00:29:49.755364 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:29:49.759231 coreos-metadata[1469]: Nov 08 00:29:49.759 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Nov 8 00:29:49.760350 coreos-metadata[1469]: Nov 08 00:29:49.760 INFO Fetch successful
Nov 8 00:29:49.760471 coreos-metadata[1469]: Nov 08 00:29:49.760 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Nov 8 00:29:49.760755 coreos-metadata[1469]: Nov 08 00:29:49.760 INFO Fetch successful
Nov 8 00:29:49.762318 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:29:49.762680 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 00:29:49.764398 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:29:49.768423 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:29:49.769149 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:29:49.772916 jq[1471]: false
Nov 8 00:29:49.773096 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:29:49.781528 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:29:49.782280 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:29:49.785567 extend-filesystems[1472]: Found loop4
Nov 8 00:29:49.786380 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 8 00:29:49.786431 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 8 00:29:49.789381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 8 00:29:49.789416 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found loop5
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found loop6
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found loop7
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found sda
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found sda1
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found sda2
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found sda3
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found usr
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found sda4
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found sda6
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found sda7
Nov 8 00:29:49.794459 extend-filesystems[1472]: Found sda9
Nov 8 00:29:49.794459 extend-filesystems[1472]: Checking size of /dev/sda9
Nov 8 00:29:49.814099 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 00:29:49.814235 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 00:29:49.820348 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 00:29:49.820509 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 00:29:49.827007 jq[1488]: true
Nov 8 00:29:49.835565 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 00:29:49.847317 extend-filesystems[1472]: Resized partition /dev/sda9
Nov 8 00:29:49.859792 tar[1499]: linux-amd64/LICENSE
Nov 8 00:29:49.859792 tar[1499]: linux-amd64/helm
Nov 8 00:29:49.860018 extend-filesystems[1515]: resize2fs 1.47.1 (20-May-2024)
Nov 8 00:29:49.867965 update_engine[1487]: I20251108 00:29:49.861013 1487 main.cc:92] Flatcar Update Engine starting
Nov 8 00:29:49.867965 update_engine[1487]: I20251108 00:29:49.867605 1487 update_check_scheduler.cc:74] Next update check in 9m6s
Nov 8 00:29:49.877828 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Nov 8 00:29:49.866604 systemd[1]: Started update-engine.service - Update Engine.
Nov 8 00:29:49.877912 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 8 00:29:49.893286 jq[1503]: true
Nov 8 00:29:49.906938 systemd-logind[1486]: New seat seat0.
Nov 8 00:29:49.909930 systemd-logind[1486]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 8 00:29:49.910024 systemd-logind[1486]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 8 00:29:49.910178 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 00:29:49.923625 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 8 00:29:49.928122 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 8 00:29:49.967235 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1407)
Nov 8 00:29:50.033276 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Nov 8 00:29:50.064011 bash[1539]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:29:50.037562 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 8 00:29:50.044567 systemd[1]: Starting sshkeys.service...
Nov 8 00:29:50.069269 extend-filesystems[1515]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 8 00:29:50.069269 extend-filesystems[1515]: old_desc_blocks = 1, new_desc_blocks = 5
Nov 8 00:29:50.069269 extend-filesystems[1515]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Nov 8 00:29:50.068705 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 8 00:29:50.070114 extend-filesystems[1472]: Resized filesystem in /dev/sda9
Nov 8 00:29:50.070114 extend-filesystems[1472]: Found sr0
Nov 8 00:29:50.068834 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 8 00:29:50.080135 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 8 00:29:50.091748 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 8 00:29:50.132574 coreos-metadata[1552]: Nov 08 00:29:50.132 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Nov 8 00:29:50.133902 coreos-metadata[1552]: Nov 08 00:29:50.133 INFO Fetch successful
Nov 8 00:29:50.137154 unknown[1552]: wrote ssh authorized keys file for user: core
Nov 8 00:29:50.145619 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 8 00:29:50.152317 containerd[1500]: time="2025-11-08T00:29:50.152237038Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 8 00:29:50.158500 update-ssh-keys[1556]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:29:50.158829 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 8 00:29:50.161077 systemd[1]: Finished sshkeys.service.
Nov 8 00:29:50.195652 containerd[1500]: time="2025-11-08T00:29:50.195477345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:29:50.197204 containerd[1500]: time="2025-11-08T00:29:50.197097084Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:29:50.197204 containerd[1500]: time="2025-11-08T00:29:50.197121509Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 8 00:29:50.197204 containerd[1500]: time="2025-11-08T00:29:50.197134704Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 8 00:29:50.197311 containerd[1500]: time="2025-11-08T00:29:50.197288152Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 8 00:29:50.197311 containerd[1500]: time="2025-11-08T00:29:50.197304112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 8 00:29:50.197627 containerd[1500]: time="2025-11-08T00:29:50.197356530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:29:50.197627 containerd[1500]: time="2025-11-08T00:29:50.197371929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:29:50.199415 containerd[1500]: time="2025-11-08T00:29:50.199369025Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:29:50.199459 containerd[1500]: time="2025-11-08T00:29:50.199416844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 00:29:50.199459 containerd[1500]: time="2025-11-08T00:29:50.199440719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:29:50.199516 containerd[1500]: time="2025-11-08T00:29:50.199456779Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 00:29:50.199541 containerd[1500]: time="2025-11-08T00:29:50.199533163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:29:50.200007 containerd[1500]: time="2025-11-08T00:29:50.199702149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:29:50.200007 containerd[1500]: time="2025-11-08T00:29:50.199788802Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:29:50.200007 containerd[1500]: time="2025-11-08T00:29:50.199803119Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 8 00:29:50.200007 containerd[1500]: time="2025-11-08T00:29:50.199860436Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 8 00:29:50.200007 containerd[1500]: time="2025-11-08T00:29:50.199902896Z" level=info msg="metadata content store policy set" policy=shared
Nov 8 00:29:50.204840 containerd[1500]: time="2025-11-08T00:29:50.204805501Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 8 00:29:50.204914 containerd[1500]: time="2025-11-08T00:29:50.204890430Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 8 00:29:50.204961 containerd[1500]: time="2025-11-08T00:29:50.204916519Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 8 00:29:50.204990 containerd[1500]: time="2025-11-08T00:29:50.204968998Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 8 00:29:50.205015 containerd[1500]: time="2025-11-08T00:29:50.204992291Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 8 00:29:50.205306 containerd[1500]: time="2025-11-08T00:29:50.205116814Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206414578Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206521920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206535295Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206545805Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206557136Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206566754Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206575330Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206585499Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206596009Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206605166Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206614293Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206623600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206638980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.206836 containerd[1500]: time="2025-11-08T00:29:50.206652074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206661351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206671370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206681259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206691799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206700905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206710023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206719721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206731282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206739999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206748615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206769474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206782248Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206799050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206807816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207065 containerd[1500]: time="2025-11-08T00:29:50.206815902Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 8 00:29:50.207271 containerd[1500]: time="2025-11-08T00:29:50.206855515Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 8 00:29:50.207271 containerd[1500]: time="2025-11-08T00:29:50.206869351Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 8 00:29:50.207271 containerd[1500]: time="2025-11-08T00:29:50.206877707Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 8 00:29:50.207271 containerd[1500]: time="2025-11-08T00:29:50.206886343Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 8 00:29:50.207271 containerd[1500]: time="2025-11-08T00:29:50.206893016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207271 containerd[1500]: time="2025-11-08T00:29:50.206948931Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 8 00:29:50.207271 containerd[1500]: time="2025-11-08T00:29:50.206959200Z" level=info msg="NRI interface is disabled by configuration."
Nov 8 00:29:50.207271 containerd[1500]: time="2025-11-08T00:29:50.206967455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:29:50.207398 containerd[1500]: time="2025-11-08T00:29:50.207170015Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 8 00:29:50.209654 containerd[1500]: time="2025-11-08T00:29:50.209261939Z" level=info msg="Connect containerd service"
Nov 8 00:29:50.209654 containerd[1500]: time="2025-11-08T00:29:50.209297625Z" level=info msg="using legacy CRI server"
Nov 8 00:29:50.209654 containerd[1500]: time="2025-11-08T00:29:50.209305039Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 8 00:29:50.209654 containerd[1500]: time="2025-11-08T00:29:50.209380111Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 8 00:29:50.210009 containerd[1500]: time="2025-11-08T00:29:50.209939700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:29:50.210086 containerd[1500]: time="2025-11-08T00:29:50.210048815Z" level=info msg="Start subscribing containerd event"
Nov 8 00:29:50.210111 containerd[1500]: time="2025-11-08T00:29:50.210093418Z" level=info msg="Start recovering state"
Nov 8 00:29:50.210272 containerd[1500]: time="2025-11-08T00:29:50.210137291Z" level=info msg="Start event monitor"
Nov 8 00:29:50.210272 containerd[1500]: time="2025-11-08T00:29:50.210152760Z" level=info msg="Start snapshots syncer"
Nov 8 00:29:50.210272 containerd[1500]: time="2025-11-08T00:29:50.210159773Z" level=info msg="Start cni network conf syncer for default"
Nov 8 00:29:50.210272 containerd[1500]: time="2025-11-08T00:29:50.210165453Z" level=info msg="Start streaming server"
Nov 8 00:29:50.215155 containerd[1500]: time="2025-11-08T00:29:50.210480785Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 8 00:29:50.215155 containerd[1500]: time="2025-11-08T00:29:50.210517013Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 8 00:29:50.215155 containerd[1500]: time="2025-11-08T00:29:50.210677333Z" level=info msg="containerd successfully booted in 0.059389s"
Nov 8 00:29:50.210775 systemd[1]: Started containerd.service - containerd container runtime.
Nov 8 00:29:50.491858 tar[1499]: linux-amd64/README.md
Nov 8 00:29:50.504697 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 8 00:29:50.561671 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 8 00:29:50.596862 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 8 00:29:50.606212 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 8 00:29:50.613006 systemd[1]: issuegen.service: Deactivated successfully.
Nov 8 00:29:50.613341 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 8 00:29:50.618603 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 8 00:29:50.636716 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 8 00:29:50.643544 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 8 00:29:50.655738 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 8 00:29:50.658340 systemd[1]: Reached target getty.target - Login Prompts.
Nov 8 00:29:50.928614 systemd-networkd[1398]: eth1: Gained IPv6LL
Nov 8 00:29:50.929464 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Nov 8 00:29:50.932735 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 8 00:29:50.934206 systemd[1]: Reached target network-online.target - Network is Online.
Nov 8 00:29:50.957596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:29:50.961587 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 8 00:29:50.991184 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 8 00:29:51.184746 systemd-networkd[1398]: eth0: Gained IPv6LL
Nov 8 00:29:51.185849 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Nov 8 00:29:51.749645 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 8 00:29:51.757813 systemd[1]: Started sshd@0-157.180.31.220:22-147.75.109.163:43238.service - OpenSSH per-connection server daemon (147.75.109.163:43238).
Nov 8 00:29:51.866889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:29:51.868121 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 8 00:29:51.873060 systemd[1]: Startup finished in 1.258s (kernel) + 4.925s (initrd) + 4.286s (userspace) = 10.470s.
Nov 8 00:29:51.878822 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:29:52.384381 kubelet[1603]: E1108 00:29:52.384313 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:29:52.386811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:29:52.386925 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:29:52.882752 sshd[1597]: Accepted publickey for core from 147.75.109.163 port 43238 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:29:52.884995 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:52.898813 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 8 00:29:52.903721 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 8 00:29:52.906603 systemd-logind[1486]: New session 1 of user core.
Nov 8 00:29:52.916618 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 8 00:29:52.922513 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 8 00:29:52.925998 (systemd)[1617]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 8 00:29:53.019959 systemd[1617]: Queued start job for default target default.target.
Nov 8 00:29:53.030116 systemd[1617]: Created slice app.slice - User Application Slice.
Nov 8 00:29:53.030141 systemd[1617]: Reached target paths.target - Paths.
Nov 8 00:29:53.030152 systemd[1617]: Reached target timers.target - Timers.
Nov 8 00:29:53.031293 systemd[1617]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 8 00:29:53.041177 systemd[1617]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 8 00:29:53.041217 systemd[1617]: Reached target sockets.target - Sockets.
Nov 8 00:29:53.041229 systemd[1617]: Reached target basic.target - Basic System.
Nov 8 00:29:53.041279 systemd[1617]: Reached target default.target - Main User Target.
Nov 8 00:29:53.041302 systemd[1617]: Startup finished in 110ms.
Nov 8 00:29:53.041647 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 8 00:29:53.046388 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 8 00:29:53.831844 systemd[1]: Started sshd@1-157.180.31.220:22-147.75.109.163:43250.service - OpenSSH per-connection server daemon (147.75.109.163:43250).
Nov 8 00:29:54.938609 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 43250 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:29:54.940606 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:54.948001 systemd-logind[1486]: New session 2 of user core.
Nov 8 00:29:54.954528 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 8 00:29:55.698234 sshd[1628]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:55.701164 systemd[1]: sshd@1-157.180.31.220:22-147.75.109.163:43250.service: Deactivated successfully.
Nov 8 00:29:55.703967 systemd[1]: session-2.scope: Deactivated successfully.
Nov 8 00:29:55.705482 systemd-logind[1486]: Session 2 logged out. Waiting for processes to exit.
Nov 8 00:29:55.706915 systemd-logind[1486]: Removed session 2.
Nov 8 00:29:55.871684 systemd[1]: Started sshd@2-157.180.31.220:22-147.75.109.163:43254.service - OpenSSH per-connection server daemon (147.75.109.163:43254).
Nov 8 00:29:56.869503 sshd[1635]: Accepted publickey for core from 147.75.109.163 port 43254 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:29:56.871145 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:56.875517 systemd-logind[1486]: New session 3 of user core.
Nov 8 00:29:56.883397 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 8 00:29:57.558141 sshd[1635]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:57.561788 systemd[1]: sshd@2-157.180.31.220:22-147.75.109.163:43254.service: Deactivated successfully.
Nov 8 00:29:57.563557 systemd[1]: session-3.scope: Deactivated successfully.
Nov 8 00:29:57.564306 systemd-logind[1486]: Session 3 logged out. Waiting for processes to exit.
Nov 8 00:29:57.565640 systemd-logind[1486]: Removed session 3.
Nov 8 00:29:57.741854 systemd[1]: Started sshd@3-157.180.31.220:22-147.75.109.163:43256.service - OpenSSH per-connection server daemon (147.75.109.163:43256).
Nov 8 00:29:58.742618 sshd[1642]: Accepted publickey for core from 147.75.109.163 port 43256 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:29:58.744036 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:29:58.747811 systemd-logind[1486]: New session 4 of user core.
Nov 8 00:29:58.761459 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 8 00:29:59.440805 sshd[1642]: pam_unix(sshd:session): session closed for user core
Nov 8 00:29:59.445235 systemd[1]: sshd@3-157.180.31.220:22-147.75.109.163:43256.service: Deactivated successfully.
Nov 8 00:29:59.447101 systemd[1]: session-4.scope: Deactivated successfully.
Nov 8 00:29:59.448035 systemd-logind[1486]: Session 4 logged out. Waiting for processes to exit.
Nov 8 00:29:59.449628 systemd-logind[1486]: Removed session 4.
Nov 8 00:29:59.613417 systemd[1]: Started sshd@4-157.180.31.220:22-147.75.109.163:43258.service - OpenSSH per-connection server daemon (147.75.109.163:43258).
Nov 8 00:30:00.614224 sshd[1649]: Accepted publickey for core from 147.75.109.163 port 43258 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:30:00.615514 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:00.620026 systemd-logind[1486]: New session 5 of user core.
Nov 8 00:30:00.629421 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 8 00:30:01.153066 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 8 00:30:01.153419 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:30:01.169466 sudo[1652]: pam_unix(sudo:session): session closed for user root
Nov 8 00:30:01.332287 sshd[1649]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:01.336614 systemd-logind[1486]: Session 5 logged out. Waiting for processes to exit.
Nov 8 00:30:01.337383 systemd[1]: sshd@4-157.180.31.220:22-147.75.109.163:43258.service: Deactivated successfully.
Nov 8 00:30:01.339222 systemd[1]: session-5.scope: Deactivated successfully.
Nov 8 00:30:01.340463 systemd-logind[1486]: Removed session 5.
Nov 8 00:30:01.540426 systemd[1]: Started sshd@5-157.180.31.220:22-147.75.109.163:59000.service - OpenSSH per-connection server daemon (147.75.109.163:59000).
Nov 8 00:30:02.467981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:30:02.476512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:30:02.576206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:30:02.579196 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:30:02.615682 kubelet[1667]: E1108 00:30:02.615588 1667 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:30:02.618799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:30:02.618958 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:30:02.658131 sshd[1657]: Accepted publickey for core from 147.75.109.163 port 59000 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:30:02.660113 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:02.665391 systemd-logind[1486]: New session 6 of user core.
Nov 8 00:30:02.678514 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 8 00:30:03.248951 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 8 00:30:03.249350 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:30:03.252997 sudo[1676]: pam_unix(sudo:session): session closed for user root
Nov 8 00:30:03.258136 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 8 00:30:03.258414 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:30:03.272466 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 8 00:30:03.275393 auditctl[1679]: No rules
Nov 8 00:30:03.275725 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 8 00:30:03.275889 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 8 00:30:03.277888 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:30:03.300885 augenrules[1697]: No rules
Nov 8 00:30:03.302200 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:30:03.303747 sudo[1675]: pam_unix(sudo:session): session closed for user root
Nov 8 00:30:03.485387 sshd[1657]: pam_unix(sshd:session): session closed for user core
Nov 8 00:30:03.488405 systemd[1]: sshd@5-157.180.31.220:22-147.75.109.163:59000.service: Deactivated successfully.
Nov 8 00:30:03.489641 systemd[1]: session-6.scope: Deactivated successfully.
Nov 8 00:30:03.490637 systemd-logind[1486]: Session 6 logged out. Waiting for processes to exit.
Nov 8 00:30:03.491568 systemd-logind[1486]: Removed session 6.
Nov 8 00:30:03.691860 systemd[1]: Started sshd@6-157.180.31.220:22-147.75.109.163:59010.service - OpenSSH per-connection server daemon (147.75.109.163:59010).
Nov 8 00:30:04.809913 sshd[1705]: Accepted publickey for core from 147.75.109.163 port 59010 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ
Nov 8 00:30:04.811186 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:30:04.815798 systemd-logind[1486]: New session 7 of user core.
Nov 8 00:30:04.818396 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 8 00:30:05.399103 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 8 00:30:05.399412 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 00:30:05.657439 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 8 00:30:05.657945 (dockerd)[1725]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 8 00:30:05.917841 dockerd[1725]: time="2025-11-08T00:30:05.917339879Z" level=info msg="Starting up"
Nov 8 00:30:06.019148 dockerd[1725]: time="2025-11-08T00:30:06.019098768Z" level=info msg="Loading containers: start."
Nov 8 00:30:06.109282 kernel: Initializing XFRM netlink socket
Nov 8 00:30:06.132655 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Nov 8 00:30:06.179558 systemd-networkd[1398]: docker0: Link UP
Nov 8 00:30:06.198447 dockerd[1725]: time="2025-11-08T00:30:06.198406201Z" level=info msg="Loading containers: done."
Nov 8 00:30:06.213859 systemd-timesyncd[1382]: Contacted time server 144.76.59.106:123 (2.flatcar.pool.ntp.org).
Nov 8 00:30:06.213949 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1373014268-merged.mount: Deactivated successfully.
Nov 8 00:30:06.214071 systemd-timesyncd[1382]: Initial clock synchronization to Sat 2025-11-08 00:30:06.021819 UTC.
Nov 8 00:30:06.215555 dockerd[1725]: time="2025-11-08T00:30:06.215499610Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 8 00:30:06.215621 dockerd[1725]: time="2025-11-08T00:30:06.215605268Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 8 00:30:06.215715 dockerd[1725]: time="2025-11-08T00:30:06.215689807Z" level=info msg="Daemon has completed initialization"
Nov 8 00:30:06.246231 dockerd[1725]: time="2025-11-08T00:30:06.246175253Z" level=info msg="API listen on /run/docker.sock"
Nov 8 00:30:06.246583 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 8 00:30:07.090414 containerd[1500]: time="2025-11-08T00:30:07.090364125Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Nov 8 00:30:07.625301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount342014622.mount: Deactivated successfully.
Nov 8 00:30:08.489175 containerd[1500]: time="2025-11-08T00:30:08.489119760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:08.490051 containerd[1500]: time="2025-11-08T00:30:08.490012389Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065492"
Nov 8 00:30:08.491085 containerd[1500]: time="2025-11-08T00:30:08.490803874Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:08.493099 containerd[1500]: time="2025-11-08T00:30:08.493078914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:08.493977 containerd[1500]: time="2025-11-08T00:30:08.493950170Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.403541256s"
Nov 8 00:30:08.494017 containerd[1500]: time="2025-11-08T00:30:08.493981342Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Nov 8 00:30:08.494435 containerd[1500]: time="2025-11-08T00:30:08.494412851Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Nov 8 00:30:09.450330 containerd[1500]: time="2025-11-08T00:30:09.450224690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:09.455769 containerd[1500]: time="2025-11-08T00:30:09.455631057Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159779"
Nov 8 00:30:09.457655 containerd[1500]: time="2025-11-08T00:30:09.457607036Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:09.461170 containerd[1500]: time="2025-11-08T00:30:09.460781668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:09.461692 containerd[1500]: time="2025-11-08T00:30:09.461662616Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 967.223732ms"
Nov 8 00:30:09.461739 containerd[1500]: time="2025-11-08T00:30:09.461691303Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Nov 8 00:30:09.466069 containerd[1500]: time="2025-11-08T00:30:09.466033785Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Nov 8 00:30:10.406156 containerd[1500]: time="2025-11-08T00:30:10.406082453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:10.407539 containerd[1500]: time="2025-11-08T00:30:10.407270236Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725115"
Nov 8 00:30:10.408475 containerd[1500]: time="2025-11-08T00:30:10.408448436Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:10.411975 containerd[1500]: time="2025-11-08T00:30:10.411929981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:10.413269 containerd[1500]: time="2025-11-08T00:30:10.412752026Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 946.681374ms"
Nov 8 00:30:10.413269 containerd[1500]: time="2025-11-08T00:30:10.412776591Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Nov 8 00:30:10.414058 containerd[1500]: time="2025-11-08T00:30:10.414006969Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Nov 8 00:30:11.393355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750784001.mount: Deactivated successfully.
Nov 8 00:30:11.616453 containerd[1500]: time="2025-11-08T00:30:11.616395135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:11.617495 containerd[1500]: time="2025-11-08T00:30:11.617460187Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964727"
Nov 8 00:30:11.617801 containerd[1500]: time="2025-11-08T00:30:11.617705328Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:11.621529 containerd[1500]: time="2025-11-08T00:30:11.621504353Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.207445241s"
Nov 8 00:30:11.622379 containerd[1500]: time="2025-11-08T00:30:11.621642694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Nov 8 00:30:11.622379 containerd[1500]: time="2025-11-08T00:30:11.621899046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:11.622587 containerd[1500]: time="2025-11-08T00:30:11.622551526Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 8 00:30:12.122020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4011937508.mount: Deactivated successfully.
Nov 8 00:30:12.836801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 8 00:30:12.845729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:30:12.939000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:30:12.941034 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:30:12.975849 kubelet[1996]: E1108 00:30:12.975587 1996 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:30:12.977346 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:30:12.977454 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:30:13.029059 containerd[1500]: time="2025-11-08T00:30:13.028987739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.030064 containerd[1500]: time="2025-11-08T00:30:13.029867031Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388101" Nov 8 00:30:13.031189 containerd[1500]: time="2025-11-08T00:30:13.030894620Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.033144 containerd[1500]: time="2025-11-08T00:30:13.033111952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.034156 containerd[1500]: time="2025-11-08T00:30:13.033946069Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.411369874s" Nov 8 00:30:13.034156 containerd[1500]: time="2025-11-08T00:30:13.033976952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 8 00:30:13.034387 containerd[1500]: time="2025-11-08T00:30:13.034366681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 8 00:30:13.491394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435216845.mount: Deactivated successfully. 
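The pause image requested next is not an ordinary workload: the CRI plugin uses it as the sandbox container that anchors each pod's namespaces. The tag containerd actually uses for sandboxes comes from its own config and can lag what the kubelet pre-pulls; later in this log the control-plane sandboxes are built on pause:3.8 even though pause:3.10.1 is fetched here. A sketch of the containerd 1.7 (config version 2) setting that controls this:

    # Excerpt of /etc/containerd/config.toml; aligning sandbox_image with the
    # kubeadm-pinned tag pulled above (this host evidently still defaults to 3.8).
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10.1"

A mismatch is mostly cosmetic, but it means the node ends up pulling and pinning two pause images instead of one.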
Nov 8 00:30:13.500109 containerd[1500]: time="2025-11-08T00:30:13.500040866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.500964 containerd[1500]: time="2025-11-08T00:30:13.500884795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321240" Nov 8 00:30:13.502921 containerd[1500]: time="2025-11-08T00:30:13.501749468Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.504599 containerd[1500]: time="2025-11-08T00:30:13.503808304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:13.504599 containerd[1500]: time="2025-11-08T00:30:13.504499157Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 470.108889ms" Nov 8 00:30:13.504599 containerd[1500]: time="2025-11-08T00:30:13.504525441Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 8 00:30:13.505211 containerd[1500]: time="2025-11-08T00:30:13.505182766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 8 00:30:15.446872 containerd[1500]: time="2025-11-08T00:30:15.446816571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:15.447997 containerd[1500]: time="2025-11-08T00:30:15.447795261Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514639" Nov 8 00:30:15.450278 containerd[1500]: time="2025-11-08T00:30:15.448799864Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:15.452228 containerd[1500]: time="2025-11-08T00:30:15.452202579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:15.453139 containerd[1500]: time="2025-11-08T00:30:15.453109390Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 1.947897387s" Nov 8 00:30:15.453214 containerd[1500]: time="2025-11-08T00:30:15.453140679Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 8 00:30:18.675334 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:18.683507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
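Each kubelet start in this log also warns that KUBELET_EXTRA_ARGS is referenced but unset; the unit expands it to an empty string, which is harmless but noisy. A sketch of defining the variable through a systemd drop-in rather than editing the vendor unit, with the drop-in name illustrative:

    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' > /etc/systemd/system/kubelet.service.d/20-extra-args.conf
    [Service]
    # Define the variable the unit references; leave empty or add kubelet flags here.
    Environment="KUBELET_EXTRA_ARGS="
    EOF
    systemctl daemon-reload
    systemctl restart kubelet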
Nov 8 00:30:18.711395 systemd[1]: Reloading requested from client PID 2076 ('systemctl') (unit session-7.scope)... Nov 8 00:30:18.711412 systemd[1]: Reloading... Nov 8 00:30:18.809419 zram_generator::config[2112]: No configuration found. Nov 8 00:30:18.920015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:30:19.004788 systemd[1]: Reloading finished in 293 ms. Nov 8 00:30:19.050723 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:30:19.050787 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:30:19.051131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:19.056471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:19.136756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:19.141391 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:30:19.176572 kubelet[2170]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:30:19.176572 kubelet[2170]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:30:19.178046 kubelet[2170]: I1108 00:30:19.176906 2170 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:30:19.726161 kubelet[2170]: I1108 00:30:19.726102 2170 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:30:19.726161 kubelet[2170]: I1108 00:30:19.726132 2170 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:30:19.726161 kubelet[2170]: I1108 00:30:19.726152 2170 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:30:19.726161 kubelet[2170]: I1108 00:30:19.726157 2170 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:30:19.726543 kubelet[2170]: I1108 00:30:19.726378 2170 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:30:19.742796 kubelet[2170]: E1108 00:30:19.742732 2170 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://157.180.31.220:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 157.180.31.220:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:30:19.744575 kubelet[2170]: I1108 00:30:19.743667 2170 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:30:19.750141 kubelet[2170]: E1108 00:30:19.750094 2170 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:30:19.750224 kubelet[2170]: I1108 00:30:19.750149 2170 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. 
Falling back to using cgroupDriver from kubelet config." Nov 8 00:30:19.755874 kubelet[2170]: I1108 00:30:19.755838 2170 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 8 00:30:19.756610 kubelet[2170]: I1108 00:30:19.756550 2170 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:30:19.757820 kubelet[2170]: I1108 00:30:19.756579 2170 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-6ee8ddef06","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:30:19.757820 kubelet[2170]: I1108 00:30:19.757798 2170 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:30:19.757820 kubelet[2170]: I1108 00:30:19.757807 2170 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:30:19.758014 kubelet[2170]: I1108 00:30:19.757887 2170 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:30:19.760446 kubelet[2170]: I1108 00:30:19.760403 2170 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:19.761636 kubelet[2170]: I1108 00:30:19.761596 2170 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:30:19.761636 kubelet[2170]: I1108 00:30:19.761614 2170 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:30:19.761636 kubelet[2170]: I1108 00:30:19.761637 2170 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:30:19.762887 kubelet[2170]: I1108 00:30:19.761651 2170 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:30:19.764156 kubelet[2170]: E1108 00:30:19.763476 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://157.180.31.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.31.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:30:19.764156 kubelet[2170]: E1108 
00:30:19.763682 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://157.180.31.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-6ee8ddef06&limit=500&resourceVersion=0\": dial tcp 157.180.31.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:30:19.764156 kubelet[2170]: I1108 00:30:19.763961 2170 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:30:19.766091 kubelet[2170]: I1108 00:30:19.765401 2170 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:30:19.766091 kubelet[2170]: I1108 00:30:19.765429 2170 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:30:19.766091 kubelet[2170]: W1108 00:30:19.765465 2170 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:30:19.770272 kubelet[2170]: I1108 00:30:19.768710 2170 server.go:1262] "Started kubelet" Nov 8 00:30:19.771988 kubelet[2170]: I1108 00:30:19.771942 2170 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:30:19.778517 kubelet[2170]: I1108 00:30:19.777997 2170 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:30:19.779914 kubelet[2170]: I1108 00:30:19.779881 2170 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:30:19.780279 kubelet[2170]: E1108 00:30:19.777187 2170 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.31.220:6443/api/v1/namespaces/default/events\": dial tcp 157.180.31.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-6ee8ddef06.1875e0a2f9fa9877 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-6ee8ddef06,UID:ci-4081-3-6-n-6ee8ddef06,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-6ee8ddef06,},FirstTimestamp:2025-11-08 00:30:19.768690807 +0000 UTC m=+0.624199178,LastTimestamp:2025-11-08 00:30:19.768690807 +0000 UTC m=+0.624199178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-6ee8ddef06,}" Nov 8 00:30:19.787880 kubelet[2170]: I1108 00:30:19.787725 2170 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:30:19.788038 kubelet[2170]: I1108 00:30:19.788016 2170 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:30:19.788165 kubelet[2170]: I1108 00:30:19.788138 2170 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:30:19.788437 kubelet[2170]: E1108 00:30:19.788386 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:19.788591 kubelet[2170]: I1108 00:30:19.788571 2170 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:30:19.790230 kubelet[2170]: I1108 00:30:19.790177 2170 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:30:19.791569 kubelet[2170]: I1108 00:30:19.791546 2170 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:30:19.791642 kubelet[2170]: I1108 00:30:19.791606 2170 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:30:19.795944 kubelet[2170]: E1108 00:30:19.795918 2170 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:30:19.796565 kubelet[2170]: I1108 00:30:19.796544 2170 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:30:19.797323 kubelet[2170]: I1108 00:30:19.797293 2170 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:30:19.800345 kubelet[2170]: I1108 00:30:19.800305 2170 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:30:19.810031 kubelet[2170]: E1108 00:30:19.809670 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.31.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-6ee8ddef06?timeout=10s\": dial tcp 157.180.31.220:6443: connect: connection refused" interval="200ms" Nov 8 00:30:19.811979 kubelet[2170]: E1108 00:30:19.810445 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://157.180.31.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.31.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:30:19.812415 kubelet[2170]: I1108 00:30:19.812359 2170 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:30:19.814638 kubelet[2170]: I1108 00:30:19.814152 2170 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:30:19.814638 kubelet[2170]: I1108 00:30:19.814168 2170 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:30:19.814638 kubelet[2170]: I1108 00:30:19.814186 2170 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:30:19.814638 kubelet[2170]: E1108 00:30:19.814211 2170 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:30:19.817596 kubelet[2170]: I1108 00:30:19.817568 2170 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:30:19.817596 kubelet[2170]: I1108 00:30:19.817580 2170 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:30:19.817596 kubelet[2170]: I1108 00:30:19.817593 2170 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:19.820000 kubelet[2170]: I1108 00:30:19.819961 2170 policy_none.go:49] "None policy: Start" Nov 8 00:30:19.820000 kubelet[2170]: I1108 00:30:19.819980 2170 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:30:19.820000 kubelet[2170]: I1108 00:30:19.819989 2170 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:30:19.821712 kubelet[2170]: I1108 00:30:19.821676 2170 policy_none.go:47] "Start" Nov 8 00:30:19.823811 kubelet[2170]: E1108 00:30:19.823772 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://157.180.31.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.31.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:30:19.828772 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:30:19.843641 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:30:19.846692 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:30:19.855901 kubelet[2170]: E1108 00:30:19.855843 2170 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:30:19.856043 kubelet[2170]: I1108 00:30:19.856021 2170 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:30:19.856086 kubelet[2170]: I1108 00:30:19.856033 2170 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:30:19.856907 kubelet[2170]: I1108 00:30:19.856594 2170 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:30:19.858385 kubelet[2170]: E1108 00:30:19.858357 2170 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:30:19.858385 kubelet[2170]: E1108 00:30:19.858391 2170 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:19.885306 kubelet[2170]: E1108 00:30:19.885133 2170 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://157.180.31.220:6443/api/v1/namespaces/default/events\": dial tcp 157.180.31.220:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-6ee8ddef06.1875e0a2f9fa9877 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-6ee8ddef06,UID:ci-4081-3-6-n-6ee8ddef06,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-6ee8ddef06,},FirstTimestamp:2025-11-08 00:30:19.768690807 +0000 UTC m=+0.624199178,LastTimestamp:2025-11-08 00:30:19.768690807 +0000 UTC m=+0.624199178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-6ee8ddef06,}" Nov 8 00:30:19.935519 systemd[1]: Created slice kubepods-burstable-poddf19481ededd2bc80170a80d96b1ee36.slice - libcontainer container kubepods-burstable-poddf19481ededd2bc80170a80d96b1ee36.slice. Nov 8 00:30:19.959339 kubelet[2170]: E1108 00:30:19.957480 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.959857 kubelet[2170]: I1108 00:30:19.959796 2170 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.960443 kubelet[2170]: E1108 00:30:19.960394 2170 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.31.220:6443/api/v1/nodes\": dial tcp 157.180.31.220:6443: connect: connection refused" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.968541 systemd[1]: Created slice kubepods-burstable-pod5023b177c7296b68519f4204ec63971a.slice - libcontainer container kubepods-burstable-pod5023b177c7296b68519f4204ec63971a.slice. Nov 8 00:30:19.973700 kubelet[2170]: E1108 00:30:19.973650 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.976718 systemd[1]: Created slice kubepods-burstable-pod3740f220f7359f63ea0f1e551097223e.slice - libcontainer container kubepods-burstable-pod3740f220f7359f63ea0f1e551097223e.slice. 
Nov 8 00:30:19.981606 kubelet[2170]: E1108 00:30:19.981228 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.992673 kubelet[2170]: I1108 00:30:19.992587 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df19481ededd2bc80170a80d96b1ee36-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-6ee8ddef06\" (UID: \"df19481ededd2bc80170a80d96b1ee36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.992673 kubelet[2170]: I1108 00:30:19.992652 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df19481ededd2bc80170a80d96b1ee36-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-6ee8ddef06\" (UID: \"df19481ededd2bc80170a80d96b1ee36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.992936 kubelet[2170]: I1108 00:30:19.992685 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df19481ededd2bc80170a80d96b1ee36-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-6ee8ddef06\" (UID: \"df19481ededd2bc80170a80d96b1ee36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.992936 kubelet[2170]: I1108 00:30:19.992716 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.992936 kubelet[2170]: I1108 00:30:19.992745 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.992936 kubelet[2170]: I1108 00:30:19.992769 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.992936 kubelet[2170]: I1108 00:30:19.992792 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.993387 kubelet[2170]: I1108 00:30:19.992814 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:19.993387 kubelet[2170]: I1108 00:30:19.992836 2170 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5023b177c7296b68519f4204ec63971a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-6ee8ddef06\" (UID: \"5023b177c7296b68519f4204ec63971a\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:20.011112 kubelet[2170]: E1108 00:30:20.011019 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.31.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-6ee8ddef06?timeout=10s\": dial tcp 157.180.31.220:6443: connect: connection refused" interval="400ms" Nov 8 00:30:20.162718 kubelet[2170]: I1108 00:30:20.162678 2170 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:20.163007 kubelet[2170]: E1108 00:30:20.162981 2170 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.31.220:6443/api/v1/nodes\": dial tcp 157.180.31.220:6443: connect: connection refused" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:20.263693 containerd[1500]: time="2025-11-08T00:30:20.263520448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-6ee8ddef06,Uid:df19481ededd2bc80170a80d96b1ee36,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:20.287717 containerd[1500]: time="2025-11-08T00:30:20.287622184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-6ee8ddef06,Uid:5023b177c7296b68519f4204ec63971a,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:20.288876 containerd[1500]: time="2025-11-08T00:30:20.288770325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-6ee8ddef06,Uid:3740f220f7359f63ea0f1e551097223e,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:20.412430 kubelet[2170]: E1108 00:30:20.412375 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.31.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-6ee8ddef06?timeout=10s\": dial tcp 157.180.31.220:6443: connect: connection refused" interval="800ms" Nov 8 00:30:20.566026 kubelet[2170]: I1108 00:30:20.565892 2170 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:20.566404 kubelet[2170]: E1108 00:30:20.566278 2170 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.31.220:6443/api/v1/nodes\": dial tcp 157.180.31.220:6443: connect: connection refused" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:20.608524 kubelet[2170]: E1108 00:30:20.608474 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://157.180.31.220:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 157.180.31.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:30:20.738585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1450443082.mount: Deactivated successfully. 
Nov 8 00:30:20.747462 containerd[1500]: time="2025-11-08T00:30:20.747366062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:20.751191 containerd[1500]: time="2025-11-08T00:30:20.751120180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Nov 8 00:30:20.752066 containerd[1500]: time="2025-11-08T00:30:20.752016795Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:20.753523 containerd[1500]: time="2025-11-08T00:30:20.753451058Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:20.757053 containerd[1500]: time="2025-11-08T00:30:20.755284151Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:30:20.757053 containerd[1500]: time="2025-11-08T00:30:20.755389493Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:20.757053 containerd[1500]: time="2025-11-08T00:30:20.756346203Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:30:20.758289 containerd[1500]: time="2025-11-08T00:30:20.757933707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:30:20.760685 containerd[1500]: time="2025-11-08T00:30:20.760589667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 471.710729ms" Nov 8 00:30:20.762684 containerd[1500]: time="2025-11-08T00:30:20.762520289Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.875036ms" Nov 8 00:30:20.764283 containerd[1500]: time="2025-11-08T00:30:20.763921591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 476.154629ms" Nov 8 00:30:20.868883 containerd[1500]: time="2025-11-08T00:30:20.866485023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:20.868883 containerd[1500]: time="2025-11-08T00:30:20.866661675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:20.868883 containerd[1500]: time="2025-11-08T00:30:20.866695895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:20.868883 containerd[1500]: time="2025-11-08T00:30:20.867241885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:20.879296 containerd[1500]: time="2025-11-08T00:30:20.876109503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:20.879296 containerd[1500]: time="2025-11-08T00:30:20.876147513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:20.879296 containerd[1500]: time="2025-11-08T00:30:20.876158097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:20.879296 containerd[1500]: time="2025-11-08T00:30:20.878323756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:20.888938 containerd[1500]: time="2025-11-08T00:30:20.888866170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:20.889673 containerd[1500]: time="2025-11-08T00:30:20.888912909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:20.891432 systemd[1]: Started cri-containerd-d511180c6ee8ee4150691b376d40849d06d170544f8195eec52be889b3134a7c.scope - libcontainer container d511180c6ee8ee4150691b376d40849d06d170544f8195eec52be889b3134a7c. Nov 8 00:30:20.892852 containerd[1500]: time="2025-11-08T00:30:20.892681753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:20.892852 containerd[1500]: time="2025-11-08T00:30:20.892751228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:20.911381 systemd[1]: Started cri-containerd-c24c213fa1ce74a490ce4edbd5abc2c31e8782e14cf22e97b29c89b4b4d3adf9.scope - libcontainer container c24c213fa1ce74a490ce4edbd5abc2c31e8782e14cf22e97b29c89b4b4d3adf9. Nov 8 00:30:20.928368 systemd[1]: Started cri-containerd-3bc0962c00cd6e74b1a73f60a4237d17bb9eb9871e309e4ff16a38f650a87ba6.scope - libcontainer container 3bc0962c00cd6e74b1a73f60a4237d17bb9eb9871e309e4ff16a38f650a87ba6. 
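containerd has now started three runc shims, one per control-plane sandbox, and the long hex names in the cri-containerd-*.scope units (d5111…, c24c2…, 3bc09…) are the sandbox ids. Mapping those back to pods is easier through the CRI than through systemd; a sketch with crictl, assuming the endpoint configured earlier:

    # One entry per pod sandbox, with truncated ids matching the scope names.
    crictl pods

    # The containers placed inside those sandboxes, including any that exited.
    crictl ps -a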
Nov 8 00:30:20.947112 kubelet[2170]: E1108 00:30:20.947062 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://157.180.31.220:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 157.180.31.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:30:20.952277 containerd[1500]: time="2025-11-08T00:30:20.952229804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-6ee8ddef06,Uid:3740f220f7359f63ea0f1e551097223e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d511180c6ee8ee4150691b376d40849d06d170544f8195eec52be889b3134a7c\"" Nov 8 00:30:20.963059 containerd[1500]: time="2025-11-08T00:30:20.963020043Z" level=info msg="CreateContainer within sandbox \"d511180c6ee8ee4150691b376d40849d06d170544f8195eec52be889b3134a7c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:30:20.971657 containerd[1500]: time="2025-11-08T00:30:20.971613658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-6ee8ddef06,Uid:df19481ededd2bc80170a80d96b1ee36,Namespace:kube-system,Attempt:0,} returns sandbox id \"c24c213fa1ce74a490ce4edbd5abc2c31e8782e14cf22e97b29c89b4b4d3adf9\"" Nov 8 00:30:20.977788 containerd[1500]: time="2025-11-08T00:30:20.977751349Z" level=info msg="CreateContainer within sandbox \"c24c213fa1ce74a490ce4edbd5abc2c31e8782e14cf22e97b29c89b4b4d3adf9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:30:20.983292 containerd[1500]: time="2025-11-08T00:30:20.983266949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-6ee8ddef06,Uid:5023b177c7296b68519f4204ec63971a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bc0962c00cd6e74b1a73f60a4237d17bb9eb9871e309e4ff16a38f650a87ba6\"" Nov 8 00:30:20.988865 containerd[1500]: time="2025-11-08T00:30:20.988819472Z" level=info msg="CreateContainer within sandbox \"d511180c6ee8ee4150691b376d40849d06d170544f8195eec52be889b3134a7c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0\"" Nov 8 00:30:20.989387 containerd[1500]: time="2025-11-08T00:30:20.989369982Z" level=info msg="CreateContainer within sandbox \"3bc0962c00cd6e74b1a73f60a4237d17bb9eb9871e309e4ff16a38f650a87ba6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:30:20.989824 containerd[1500]: time="2025-11-08T00:30:20.989800954Z" level=info msg="StartContainer for \"1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0\"" Nov 8 00:30:20.998119 containerd[1500]: time="2025-11-08T00:30:20.998087783Z" level=info msg="CreateContainer within sandbox \"c24c213fa1ce74a490ce4edbd5abc2c31e8782e14cf22e97b29c89b4b4d3adf9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e072ec94362d26fafda5ce5fc00cd3ee588c68484cea82edcbec4c0b451d3d5b\"" Nov 8 00:30:20.998821 containerd[1500]: time="2025-11-08T00:30:20.998788499Z" level=info msg="StartContainer for \"e072ec94362d26fafda5ce5fc00cd3ee588c68484cea82edcbec4c0b451d3d5b\"" Nov 8 00:30:21.003469 containerd[1500]: time="2025-11-08T00:30:21.003423354Z" level=info msg="CreateContainer within sandbox \"3bc0962c00cd6e74b1a73f60a4237d17bb9eb9871e309e4ff16a38f650a87ba6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e\"" Nov 8 00:30:21.004019 containerd[1500]: time="2025-11-08T00:30:21.003999003Z" level=info msg="StartContainer for \"92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e\"" Nov 8 00:30:21.021623 systemd[1]: Started cri-containerd-1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0.scope - libcontainer container 1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0. Nov 8 00:30:21.031374 systemd[1]: Started cri-containerd-e072ec94362d26fafda5ce5fc00cd3ee588c68484cea82edcbec4c0b451d3d5b.scope - libcontainer container e072ec94362d26fafda5ce5fc00cd3ee588c68484cea82edcbec4c0b451d3d5b. Nov 8 00:30:21.040409 systemd[1]: Started cri-containerd-92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e.scope - libcontainer container 92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e. Nov 8 00:30:21.073008 containerd[1500]: time="2025-11-08T00:30:21.071814850Z" level=info msg="StartContainer for \"1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0\" returns successfully" Nov 8 00:30:21.117593 containerd[1500]: time="2025-11-08T00:30:21.117193018Z" level=info msg="StartContainer for \"92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e\" returns successfully" Nov 8 00:30:21.117593 containerd[1500]: time="2025-11-08T00:30:21.117197889Z" level=info msg="StartContainer for \"e072ec94362d26fafda5ce5fc00cd3ee588c68484cea82edcbec4c0b451d3d5b\" returns successfully" Nov 8 00:30:21.213491 kubelet[2170]: E1108 00:30:21.212766 2170 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://157.180.31.220:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-6ee8ddef06?timeout=10s\": dial tcp 157.180.31.220:6443: connect: connection refused" interval="1.6s" Nov 8 00:30:21.238494 kubelet[2170]: E1108 00:30:21.238453 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://157.180.31.220:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-6ee8ddef06&limit=500&resourceVersion=0\": dial tcp 157.180.31.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:30:21.267088 kubelet[2170]: E1108 00:30:21.267022 2170 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://157.180.31.220:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 157.180.31.220:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:30:21.369714 kubelet[2170]: I1108 00:30:21.369119 2170 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:21.370305 kubelet[2170]: E1108 00:30:21.370277 2170 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://157.180.31.220:6443/api/v1/nodes\": dial tcp 157.180.31.220:6443: connect: connection refused" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:21.841870 kubelet[2170]: E1108 00:30:21.841827 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:21.846454 kubelet[2170]: E1108 00:30:21.846428 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:21.849637 kubelet[2170]: E1108 00:30:21.849606 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:22.852807 kubelet[2170]: E1108 00:30:22.852772 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:22.855390 kubelet[2170]: E1108 00:30:22.855367 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:22.972617 kubelet[2170]: I1108 00:30:22.972592 2170 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:22.976209 kubelet[2170]: E1108 00:30:22.976188 2170 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:23.088822 kubelet[2170]: I1108 00:30:23.088782 2170 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:23.088822 kubelet[2170]: E1108 00:30:23.088822 2170 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-6ee8ddef06\": node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:23.104554 kubelet[2170]: E1108 00:30:23.104217 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:23.205353 kubelet[2170]: E1108 00:30:23.205301 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:23.305824 kubelet[2170]: E1108 00:30:23.305704 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:23.406662 kubelet[2170]: E1108 00:30:23.406515 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:23.507469 kubelet[2170]: E1108 00:30:23.507414 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:23.608271 kubelet[2170]: E1108 00:30:23.608217 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:23.709208 kubelet[2170]: E1108 00:30:23.709089 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:23.809684 kubelet[2170]: E1108 00:30:23.809623 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:23.852588 kubelet[2170]: E1108 00:30:23.852553 2170 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:23.911299 kubelet[2170]: E1108 00:30:23.911219 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:24.012101 kubelet[2170]: E1108 
00:30:24.011951 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:24.112544 kubelet[2170]: E1108 00:30:24.112485 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:24.213598 kubelet[2170]: E1108 00:30:24.213548 2170 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-6ee8ddef06\" not found" Nov 8 00:30:24.290330 kubelet[2170]: I1108 00:30:24.288708 2170 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:24.308840 kubelet[2170]: I1108 00:30:24.308645 2170 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:24.319698 kubelet[2170]: I1108 00:30:24.319635 2170 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:24.768305 kubelet[2170]: I1108 00:30:24.766890 2170 apiserver.go:52] "Watching apiserver" Nov 8 00:30:24.792221 kubelet[2170]: I1108 00:30:24.792150 2170 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:30:25.117331 systemd[1]: Reloading requested from client PID 2454 ('systemctl') (unit session-7.scope)... Nov 8 00:30:25.117349 systemd[1]: Reloading... Nov 8 00:30:25.191299 zram_generator::config[2494]: No configuration found. Nov 8 00:30:25.284888 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:30:25.351603 systemd[1]: Reloading finished in 233 ms. Nov 8 00:30:25.378404 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:25.392068 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:30:25.392226 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:25.400455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:30:25.496390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:30:25.499363 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:30:25.555693 kubelet[2545]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:30:25.555693 kubelet[2545]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
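Both systemd reloads in this log print the same deprecation: line 6 of docker.socket still listens on /var/run/docker.sock, which systemd rewrites to /run/docker.sock on the fly. The durable fix is a drop-in that resets the socket list rather than an edit to the vendor unit; a sketch (drop-in name illustrative):

    mkdir -p /etc/systemd/system/docker.socket.d
    cat <<'EOF' > /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # An empty assignment clears the inherited list; then set the modern path.
    ListenStream=
    ListenStream=/run/docker.sock
    EOF
    systemctl daemon-reload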
Nov 8 00:30:25.555693 kubelet[2545]: I1108 00:30:25.552857 2545 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:30:25.561060 kubelet[2545]: I1108 00:30:25.561031 2545 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:30:25.561060 kubelet[2545]: I1108 00:30:25.561049 2545 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:30:25.561144 kubelet[2545]: I1108 00:30:25.561066 2545 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:30:25.561144 kubelet[2545]: I1108 00:30:25.561074 2545 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:30:25.561241 kubelet[2545]: I1108 00:30:25.561224 2545 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:30:25.562105 kubelet[2545]: I1108 00:30:25.562086 2545 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:30:25.569014 kubelet[2545]: I1108 00:30:25.568918 2545 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:30:25.592579 kubelet[2545]: E1108 00:30:25.592529 2545 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:30:25.592677 kubelet[2545]: I1108 00:30:25.592600 2545 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:30:25.595298 kubelet[2545]: I1108 00:30:25.595283 2545 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:30:25.597311 kubelet[2545]: I1108 00:30:25.596295 2545 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:30:25.597311 kubelet[2545]: I1108 00:30:25.596323 2545 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-6ee8ddef06","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:30:25.597311 kubelet[2545]: I1108 00:30:25.596456 2545 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:30:25.597311 kubelet[2545]: I1108 00:30:25.596464 2545 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:30:25.597489 kubelet[2545]: I1108 00:30:25.596488 2545 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:30:25.597489 kubelet[2545]: I1108 00:30:25.597101 2545 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:25.598275 kubelet[2545]: I1108 00:30:25.598221 2545 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:30:25.598422 kubelet[2545]: I1108 00:30:25.598396 2545 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:30:25.598451 kubelet[2545]: I1108 00:30:25.598435 2545 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:30:25.598473 kubelet[2545]: I1108 00:30:25.598454 2545 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:30:25.604308 kubelet[2545]: I1108 00:30:25.603491 2545 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:30:25.604308 kubelet[2545]: I1108 00:30:25.603928 2545 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:30:25.604308 kubelet[2545]: I1108 00:30:25.603953 2545 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:30:25.610273 
kubelet[2545]: I1108 00:30:25.608615 2545 server.go:1262] "Started kubelet" Nov 8 00:30:25.612880 kubelet[2545]: I1108 00:30:25.612822 2545 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:30:25.613026 kubelet[2545]: I1108 00:30:25.613005 2545 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:30:25.613108 kubelet[2545]: I1108 00:30:25.613097 2545 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:30:25.613365 kubelet[2545]: I1108 00:30:25.613351 2545 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:30:25.616711 kubelet[2545]: I1108 00:30:25.616476 2545 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:30:25.626521 kubelet[2545]: I1108 00:30:25.626487 2545 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:30:25.628269 kubelet[2545]: I1108 00:30:25.628238 2545 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:30:25.631397 kubelet[2545]: I1108 00:30:25.630624 2545 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:30:25.631570 kubelet[2545]: I1108 00:30:25.631558 2545 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:30:25.631807 kubelet[2545]: I1108 00:30:25.631683 2545 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:30:25.634550 kubelet[2545]: I1108 00:30:25.634533 2545 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:30:25.636029 kubelet[2545]: E1108 00:30:25.636014 2545 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:30:25.637667 kubelet[2545]: I1108 00:30:25.637321 2545 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:30:25.637743 kubelet[2545]: I1108 00:30:25.637733 2545 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:30:25.638648 kubelet[2545]: I1108 00:30:25.638098 2545 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:30:25.639181 kubelet[2545]: I1108 00:30:25.639058 2545 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:30:25.639181 kubelet[2545]: I1108 00:30:25.639094 2545 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:30:25.639181 kubelet[2545]: I1108 00:30:25.639112 2545 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:30:25.639181 kubelet[2545]: E1108 00:30:25.639152 2545 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:30:25.683107 kubelet[2545]: I1108 00:30:25.683086 2545 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:30:25.683272 kubelet[2545]: I1108 00:30:25.683234 2545 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:30:25.683336 kubelet[2545]: I1108 00:30:25.683328 2545 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:30:25.683479 kubelet[2545]: I1108 00:30:25.683468 2545 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:30:25.683537 kubelet[2545]: I1108 00:30:25.683519 2545 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:30:25.683571 kubelet[2545]: I1108 00:30:25.683566 2545 policy_none.go:49] "None policy: Start" Nov 8 00:30:25.683611 kubelet[2545]: I1108 00:30:25.683605 2545 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:30:25.683651 kubelet[2545]: I1108 00:30:25.683645 2545 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:30:25.683762 kubelet[2545]: I1108 00:30:25.683752 2545 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 8 00:30:25.683802 kubelet[2545]: I1108 00:30:25.683797 2545 policy_none.go:47] "Start" Nov 8 00:30:25.687890 kubelet[2545]: E1108 00:30:25.687877 2545 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:30:25.688495 kubelet[2545]: I1108 00:30:25.688481 2545 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:30:25.688574 kubelet[2545]: I1108 00:30:25.688551 2545 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:30:25.688870 kubelet[2545]: I1108 00:30:25.688861 2545 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:30:25.691510 kubelet[2545]: E1108 00:30:25.691495 2545 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:30:25.739970 kubelet[2545]: I1108 00:30:25.739942 2545 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.740118 kubelet[2545]: I1108 00:30:25.740097 2545 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.744147 kubelet[2545]: I1108 00:30:25.739947 2545 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.753520 kubelet[2545]: E1108 00:30:25.753490 2545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-6ee8ddef06\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.754388 kubelet[2545]: E1108 00:30:25.754356 2545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-6ee8ddef06\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.754535 kubelet[2545]: E1108 00:30:25.754503 2545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.798373 kubelet[2545]: I1108 00:30:25.798316 2545 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.806837 kubelet[2545]: I1108 00:30:25.806797 2545 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.807025 kubelet[2545]: I1108 00:30:25.806862 2545 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.933807 kubelet[2545]: I1108 00:30:25.933673 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df19481ededd2bc80170a80d96b1ee36-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-6ee8ddef06\" (UID: \"df19481ededd2bc80170a80d96b1ee36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.933807 kubelet[2545]: I1108 00:30:25.933713 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.933807 kubelet[2545]: I1108 00:30:25.933742 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5023b177c7296b68519f4204ec63971a-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-6ee8ddef06\" (UID: \"5023b177c7296b68519f4204ec63971a\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.933807 kubelet[2545]: I1108 00:30:25.933761 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df19481ededd2bc80170a80d96b1ee36-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-6ee8ddef06\" (UID: \"df19481ededd2bc80170a80d96b1ee36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.933807 kubelet[2545]: I1108 00:30:25.933776 2545 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.933997 kubelet[2545]: I1108 00:30:25.933791 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.933997 kubelet[2545]: I1108 00:30:25.933807 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.933997 kubelet[2545]: I1108 00:30:25.933821 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3740f220f7359f63ea0f1e551097223e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" (UID: \"3740f220f7359f63ea0f1e551097223e\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:25.933997 kubelet[2545]: I1108 00:30:25.933837 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df19481ededd2bc80170a80d96b1ee36-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-6ee8ddef06\" (UID: \"df19481ededd2bc80170a80d96b1ee36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:26.601417 kubelet[2545]: I1108 00:30:26.601376 2545 apiserver.go:52] "Watching apiserver" Nov 8 00:30:26.632838 kubelet[2545]: I1108 00:30:26.632769 2545 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:30:26.674294 kubelet[2545]: I1108 00:30:26.671815 2545 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:26.674294 kubelet[2545]: I1108 00:30:26.672039 2545 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:26.674294 kubelet[2545]: I1108 00:30:26.672189 2545 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:26.683023 kubelet[2545]: E1108 00:30:26.682904 2545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-6ee8ddef06\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:26.683305 kubelet[2545]: E1108 00:30:26.683225 2545 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-6ee8ddef06\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:26.685708 kubelet[2545]: E1108 00:30:26.685347 2545 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ci-4081-3-6-n-6ee8ddef06\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6ee8ddef06" Nov 8 00:30:26.703123 kubelet[2545]: I1108 00:30:26.703047 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-6ee8ddef06" podStartSLOduration=2.70300198 podStartE2EDuration="2.70300198s" podCreationTimestamp="2025-11-08 00:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:26.698206213 +0000 UTC m=+1.192708102" watchObservedRunningTime="2025-11-08 00:30:26.70300198 +0000 UTC m=+1.197503870" Nov 8 00:30:26.710587 kubelet[2545]: I1108 00:30:26.710498 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-6ee8ddef06" podStartSLOduration=2.710475363 podStartE2EDuration="2.710475363s" podCreationTimestamp="2025-11-08 00:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:26.71012722 +0000 UTC m=+1.204629109" watchObservedRunningTime="2025-11-08 00:30:26.710475363 +0000 UTC m=+1.204977281" Nov 8 00:30:31.197416 kubelet[2545]: I1108 00:30:31.197382 2545 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:30:31.198079 kubelet[2545]: I1108 00:30:31.198065 2545 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:30:31.198128 containerd[1500]: time="2025-11-08T00:30:31.197837184Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:30:31.656153 kubelet[2545]: I1108 00:30:31.655998 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-6ee8ddef06" podStartSLOduration=7.655984083 podStartE2EDuration="7.655984083s" podCreationTimestamp="2025-11-08 00:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:26.719665088 +0000 UTC m=+1.214166977" watchObservedRunningTime="2025-11-08 00:30:31.655984083 +0000 UTC m=+6.150485982" Nov 8 00:30:32.204244 systemd[1]: Created slice kubepods-besteffort-poddb82ea12_b8f2_4f23_8fa7_1fb0332f33c2.slice - libcontainer container kubepods-besteffort-poddb82ea12_b8f2_4f23_8fa7_1fb0332f33c2.slice. 
Nov 8 00:30:32.273772 kubelet[2545]: I1108 00:30:32.273712 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db82ea12-b8f2-4f23-8fa7-1fb0332f33c2-kube-proxy\") pod \"kube-proxy-kcddm\" (UID: \"db82ea12-b8f2-4f23-8fa7-1fb0332f33c2\") " pod="kube-system/kube-proxy-kcddm" Nov 8 00:30:32.273772 kubelet[2545]: I1108 00:30:32.273768 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db82ea12-b8f2-4f23-8fa7-1fb0332f33c2-xtables-lock\") pod \"kube-proxy-kcddm\" (UID: \"db82ea12-b8f2-4f23-8fa7-1fb0332f33c2\") " pod="kube-system/kube-proxy-kcddm" Nov 8 00:30:32.274169 kubelet[2545]: I1108 00:30:32.273813 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rsm\" (UniqueName: \"kubernetes.io/projected/db82ea12-b8f2-4f23-8fa7-1fb0332f33c2-kube-api-access-w9rsm\") pod \"kube-proxy-kcddm\" (UID: \"db82ea12-b8f2-4f23-8fa7-1fb0332f33c2\") " pod="kube-system/kube-proxy-kcddm" Nov 8 00:30:32.274169 kubelet[2545]: I1108 00:30:32.273834 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db82ea12-b8f2-4f23-8fa7-1fb0332f33c2-lib-modules\") pod \"kube-proxy-kcddm\" (UID: \"db82ea12-b8f2-4f23-8fa7-1fb0332f33c2\") " pod="kube-system/kube-proxy-kcddm" Nov 8 00:30:32.408885 systemd[1]: Created slice kubepods-besteffort-podc4b76180_eede_40d6_bd79_dbb471165e01.slice - libcontainer container kubepods-besteffort-podc4b76180_eede_40d6_bd79_dbb471165e01.slice. Nov 8 00:30:32.476166 kubelet[2545]: I1108 00:30:32.476055 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4b76180-eede-40d6-bd79-dbb471165e01-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-vdtdn\" (UID: \"c4b76180-eede-40d6-bd79-dbb471165e01\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vdtdn" Nov 8 00:30:32.476357 kubelet[2545]: I1108 00:30:32.476341 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhjd4\" (UniqueName: \"kubernetes.io/projected/c4b76180-eede-40d6-bd79-dbb471165e01-kube-api-access-nhjd4\") pod \"tigera-operator-65cdcdfd6d-vdtdn\" (UID: \"c4b76180-eede-40d6-bd79-dbb471165e01\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-vdtdn" Nov 8 00:30:32.517050 containerd[1500]: time="2025-11-08T00:30:32.516999548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kcddm,Uid:db82ea12-b8f2-4f23-8fa7-1fb0332f33c2,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:32.545067 containerd[1500]: time="2025-11-08T00:30:32.544898721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:32.545067 containerd[1500]: time="2025-11-08T00:30:32.544951314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:32.545067 containerd[1500]: time="2025-11-08T00:30:32.544967040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:32.545067 containerd[1500]: time="2025-11-08T00:30:32.545035209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:32.570443 systemd[1]: Started cri-containerd-decd9d1f0a015d6d28429ad4bcc10d6aff443ed21d4b5785f495d2ac7587d003.scope - libcontainer container decd9d1f0a015d6d28429ad4bcc10d6aff443ed21d4b5785f495d2ac7587d003. Nov 8 00:30:32.596170 containerd[1500]: time="2025-11-08T00:30:32.596133845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kcddm,Uid:db82ea12-b8f2-4f23-8fa7-1fb0332f33c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"decd9d1f0a015d6d28429ad4bcc10d6aff443ed21d4b5785f495d2ac7587d003\"" Nov 8 00:30:32.605653 containerd[1500]: time="2025-11-08T00:30:32.605624634Z" level=info msg="CreateContainer within sandbox \"decd9d1f0a015d6d28429ad4bcc10d6aff443ed21d4b5785f495d2ac7587d003\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:30:32.622235 containerd[1500]: time="2025-11-08T00:30:32.621848416Z" level=info msg="CreateContainer within sandbox \"decd9d1f0a015d6d28429ad4bcc10d6aff443ed21d4b5785f495d2ac7587d003\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f04afc2230e18182bf6412c7e2a8b83145a5d73e13d9bf69b8f4fe53192815c6\"" Nov 8 00:30:32.622469 containerd[1500]: time="2025-11-08T00:30:32.622449322Z" level=info msg="StartContainer for \"f04afc2230e18182bf6412c7e2a8b83145a5d73e13d9bf69b8f4fe53192815c6\"" Nov 8 00:30:32.648394 systemd[1]: Started cri-containerd-f04afc2230e18182bf6412c7e2a8b83145a5d73e13d9bf69b8f4fe53192815c6.scope - libcontainer container f04afc2230e18182bf6412c7e2a8b83145a5d73e13d9bf69b8f4fe53192815c6. Nov 8 00:30:32.671587 containerd[1500]: time="2025-11-08T00:30:32.671388964Z" level=info msg="StartContainer for \"f04afc2230e18182bf6412c7e2a8b83145a5d73e13d9bf69b8f4fe53192815c6\" returns successfully" Nov 8 00:30:32.715447 containerd[1500]: time="2025-11-08T00:30:32.714797435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vdtdn,Uid:c4b76180-eede-40d6-bd79-dbb471165e01,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:30:32.739781 containerd[1500]: time="2025-11-08T00:30:32.739474173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:32.739781 containerd[1500]: time="2025-11-08T00:30:32.739535136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:32.739781 containerd[1500]: time="2025-11-08T00:30:32.739553424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:32.739781 containerd[1500]: time="2025-11-08T00:30:32.739628872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:32.760442 systemd[1]: Started cri-containerd-03859b7c62830457ea00ad198af6bad8a149e9b98b7a0e68957cf2c70dabc6ce.scope - libcontainer container 03859b7c62830457ea00ad198af6bad8a149e9b98b7a0e68957cf2c70dabc6ce. 
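
The records above trace the standard CRI bring-up for the kube-proxy pod: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox, and StartContainer runs the result. The same sequence can be replayed by hand with crictl against the containerd socket; a rough sketch, assuming crictl is installed and that pod.json/container.json CRI spec files exist (neither comes from this log):

    import subprocess

    def crictl(*args: str) -> str:
        """Run crictl against containerd's CRI endpoint; return stdout."""
        cmd = ["crictl", "--runtime-endpoint",
               "unix:///run/containerd/containerd.sock", *args]
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout.strip()

    # Mirrors RunPodSandbox -> CreateContainer -> StartContainer from the log.
    pod_id = crictl("runp", "pod.json")                        # sandbox id
    ctr_id = crictl("create", pod_id, "container.json", "pod.json")
    crictl("start", ctr_id)
    print(f"sandbox {pod_id} is running container {ctr_id}")
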
Nov 8 00:30:32.799805 containerd[1500]: time="2025-11-08T00:30:32.799710655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-vdtdn,Uid:c4b76180-eede-40d6-bd79-dbb471165e01,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"03859b7c62830457ea00ad198af6bad8a149e9b98b7a0e68957cf2c70dabc6ce\"" Nov 8 00:30:32.809383 containerd[1500]: time="2025-11-08T00:30:32.809209033Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:30:34.870214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890840880.mount: Deactivated successfully. Nov 8 00:30:35.214006 containerd[1500]: time="2025-11-08T00:30:35.213761850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:35.214705 containerd[1500]: time="2025-11-08T00:30:35.214540477Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:30:35.216269 containerd[1500]: time="2025-11-08T00:30:35.215451246Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:35.217176 containerd[1500]: time="2025-11-08T00:30:35.217144086Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:35.218067 containerd[1500]: time="2025-11-08T00:30:35.217682090Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.408447539s" Nov 8 00:30:35.218067 containerd[1500]: time="2025-11-08T00:30:35.217708824Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:30:35.221404 containerd[1500]: time="2025-11-08T00:30:35.221379519Z" level=info msg="CreateContainer within sandbox \"03859b7c62830457ea00ad198af6bad8a149e9b98b7a0e68957cf2c70dabc6ce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:30:35.244262 containerd[1500]: time="2025-11-08T00:30:35.244215241Z" level=info msg="CreateContainer within sandbox \"03859b7c62830457ea00ad198af6bad8a149e9b98b7a0e68957cf2c70dabc6ce\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6\"" Nov 8 00:30:35.244945 containerd[1500]: time="2025-11-08T00:30:35.244899907Z" level=info msg="StartContainer for \"cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6\"" Nov 8 00:30:35.269384 systemd[1]: Started cri-containerd-cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6.scope - libcontainer container cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6. Nov 8 00:30:35.288222 containerd[1500]: time="2025-11-08T00:30:35.288188472Z" level=info msg="StartContainer for \"cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6\" returns successfully" Nov 8 00:30:35.403079 update_engine[1487]: I20251108 00:30:35.402987 1487 update_attempter.cc:509] Updating boot flags... 
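
A containerd "Pulled image" record, like the one above for quay.io/tigera/operator:v1.38.7, packs the tag, resolved image id, repo digest, compressed size, and wall-clock pull time into a single message. A small parser for such records (my own regex, not containerd code):

    import re

    PULLED = re.compile(r'Pulled image "([^"]+)" with image id "([^"]+)", '
                        r'repo tag "[^"]*", repo digest "([^"]+)", '
                        r'size "(\d+)" in ([0-9.]+)s')

    def parse_pulled(msg: str) -> dict:
        """Extract fields from a containerd 'Pulled image ...' message."""
        m = PULLED.search(msg.replace('\\"', '"'))  # journal escapes inner quotes
        if not m:
            raise ValueError("not a Pulled-image record")
        ref, image_id, digest, size, secs = m.groups()
        return {"ref": ref, "image_id": image_id, "digest": digest,
                "size_bytes": int(size), "pull_seconds": float(secs)}

    # The record logged above:
    rec = parse_pulled(
        'Pulled image \\"quay.io/tigera/operator:v1.38.7\\" with image id '
        '\\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\\", '
        'repo tag \\"quay.io/tigera/operator:v1.38.7\\", repo digest '
        '\\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\\", '
        'size \\"25057686\\" in 2.408447539s')
    assert rec["pull_seconds"] == 2.408447539
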
Nov 8 00:30:35.442474 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2901) Nov 8 00:30:35.491274 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2903) Nov 8 00:30:35.719708 kubelet[2545]: I1108 00:30:35.717923 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kcddm" podStartSLOduration=3.71790843 podStartE2EDuration="3.71790843s" podCreationTimestamp="2025-11-08 00:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:30:32.711464849 +0000 UTC m=+7.205966738" watchObservedRunningTime="2025-11-08 00:30:35.71790843 +0000 UTC m=+10.212410319" Nov 8 00:30:35.720238 kubelet[2545]: I1108 00:30:35.720200 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-vdtdn" podStartSLOduration=1.310764376 podStartE2EDuration="3.720189288s" podCreationTimestamp="2025-11-08 00:30:32 +0000 UTC" firstStartedPulling="2025-11-08 00:30:32.808862137 +0000 UTC m=+7.303364036" lastFinishedPulling="2025-11-08 00:30:35.218287059 +0000 UTC m=+9.712788948" observedRunningTime="2025-11-08 00:30:35.717002346 +0000 UTC m=+10.211504246" watchObservedRunningTime="2025-11-08 00:30:35.720189288 +0000 UTC m=+10.214691187" Nov 8 00:30:39.337589 sudo[1708]: pam_unix(sudo:session): session closed for user root Nov 8 00:30:39.520463 sshd[1705]: pam_unix(sshd:session): session closed for user core Nov 8 00:30:39.525384 systemd-logind[1486]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:30:39.527379 systemd[1]: sshd@6-157.180.31.220:22-147.75.109.163:59010.service: Deactivated successfully. Nov 8 00:30:39.529563 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:30:39.529864 systemd[1]: session-7.scope: Consumed 4.990s CPU time, 144.5M memory peak, 0B memory swap peak. Nov 8 00:30:39.531768 systemd-logind[1486]: Removed session 7. Nov 8 00:30:43.631175 systemd[1]: Created slice kubepods-besteffort-pode630cc04_47bc_43c2_8c3e_be04887efb2f.slice - libcontainer container kubepods-besteffort-pode630cc04_47bc_43c2_8c3e_be04887efb2f.slice. 
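
The pod_startup_latency_tracker records above make the bookkeeping behind the two durations visible: podStartE2EDuration is the observed running time minus the pod creation time, while podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling); pods that pulled nothing log the zero time 0001-01-01 and the two values coincide, as for kube-proxy. Re-deriving the tigera-operator numbers from the log (a worked check, with timestamps truncated to microseconds here; the kubelet keeps nanoseconds):

    from datetime import datetime

    def t(s: str) -> datetime:
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f")

    created   = t("2025-11-08 00:30:32.000000")  # podCreationTimestamp
    observed  = t("2025-11-08 00:30:35.720189")  # watchObservedRunningTime
    pull_from = t("2025-11-08 00:30:32.808862")  # firstStartedPulling
    pull_to   = t("2025-11-08 00:30:35.218287")  # lastFinishedPulling

    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_to - pull_from).total_seconds()
    # ~3.720189 s and ~1.310764 s, matching the logged values.
    print(f"podStartE2EDuration={e2e:.6f}s podStartSLOduration={slo:.6f}s")
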
Nov 8 00:30:43.654953 kubelet[2545]: I1108 00:30:43.654884 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5v72\" (UniqueName: \"kubernetes.io/projected/e630cc04-47bc-43c2-8c3e-be04887efb2f-kube-api-access-d5v72\") pod \"calico-typha-578797c68d-gxflz\" (UID: \"e630cc04-47bc-43c2-8c3e-be04887efb2f\") " pod="calico-system/calico-typha-578797c68d-gxflz" Nov 8 00:30:43.654953 kubelet[2545]: I1108 00:30:43.654939 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e630cc04-47bc-43c2-8c3e-be04887efb2f-typha-certs\") pod \"calico-typha-578797c68d-gxflz\" (UID: \"e630cc04-47bc-43c2-8c3e-be04887efb2f\") " pod="calico-system/calico-typha-578797c68d-gxflz" Nov 8 00:30:43.655899 kubelet[2545]: I1108 00:30:43.654970 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e630cc04-47bc-43c2-8c3e-be04887efb2f-tigera-ca-bundle\") pod \"calico-typha-578797c68d-gxflz\" (UID: \"e630cc04-47bc-43c2-8c3e-be04887efb2f\") " pod="calico-system/calico-typha-578797c68d-gxflz" Nov 8 00:30:43.869026 systemd[1]: Created slice kubepods-besteffort-pod3a0abe83_7819_4f78_a978_506dfb427e5e.slice - libcontainer container kubepods-besteffort-pod3a0abe83_7819_4f78_a978_506dfb427e5e.slice. Nov 8 00:30:43.938528 containerd[1500]: time="2025-11-08T00:30:43.937992636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578797c68d-gxflz,Uid:e630cc04-47bc-43c2-8c3e-be04887efb2f,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:43.956191 kubelet[2545]: I1108 00:30:43.955937 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3a0abe83-7819-4f78-a978-506dfb427e5e-cni-bin-dir\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956191 kubelet[2545]: I1108 00:30:43.955970 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3a0abe83-7819-4f78-a978-506dfb427e5e-cni-net-dir\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956191 kubelet[2545]: I1108 00:30:43.955982 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a0abe83-7819-4f78-a978-506dfb427e5e-tigera-ca-bundle\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956191 kubelet[2545]: I1108 00:30:43.955995 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3a0abe83-7819-4f78-a978-506dfb427e5e-var-run-calico\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956191 kubelet[2545]: I1108 00:30:43.956009 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3a0abe83-7819-4f78-a978-506dfb427e5e-flexvol-driver-host\") pod \"calico-node-fbzgm\" (UID: 
\"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956411 kubelet[2545]: I1108 00:30:43.956023 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6579\" (UniqueName: \"kubernetes.io/projected/3a0abe83-7819-4f78-a978-506dfb427e5e-kube-api-access-f6579\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956411 kubelet[2545]: I1108 00:30:43.956036 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a0abe83-7819-4f78-a978-506dfb427e5e-lib-modules\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956411 kubelet[2545]: I1108 00:30:43.956047 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3a0abe83-7819-4f78-a978-506dfb427e5e-node-certs\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956411 kubelet[2545]: I1108 00:30:43.956061 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3a0abe83-7819-4f78-a978-506dfb427e5e-cni-log-dir\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956411 kubelet[2545]: I1108 00:30:43.956072 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3a0abe83-7819-4f78-a978-506dfb427e5e-policysync\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956494 kubelet[2545]: I1108 00:30:43.956084 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a0abe83-7819-4f78-a978-506dfb427e5e-var-lib-calico\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.956494 kubelet[2545]: I1108 00:30:43.956098 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a0abe83-7819-4f78-a978-506dfb427e5e-xtables-lock\") pod \"calico-node-fbzgm\" (UID: \"3a0abe83-7819-4f78-a978-506dfb427e5e\") " pod="calico-system/calico-node-fbzgm" Nov 8 00:30:43.970217 containerd[1500]: time="2025-11-08T00:30:43.969789698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:30:43.970217 containerd[1500]: time="2025-11-08T00:30:43.969868367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:30:43.970217 containerd[1500]: time="2025-11-08T00:30:43.969881532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:30:43.970217 containerd[1500]: time="2025-11-08T00:30:43.969971053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:30:44.007554 systemd[1]: Started cri-containerd-fdeedd4190487e46f1f3833ab2c8cef2ea8eb4f9c6b7220f62d82ffc1b606f5d.scope - libcontainer container fdeedd4190487e46f1f3833ab2c8cef2ea8eb4f9c6b7220f62d82ffc1b606f5d.
Nov 8 00:30:44.045548 containerd[1500]: time="2025-11-08T00:30:44.045497867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-578797c68d-gxflz,Uid:e630cc04-47bc-43c2-8c3e-be04887efb2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"fdeedd4190487e46f1f3833ab2c8cef2ea8eb4f9c6b7220f62d82ffc1b606f5d\""
Nov 8 00:30:44.047384 containerd[1500]: time="2025-11-08T00:30:44.047276607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:30:44.061757 kubelet[2545]: E1108 00:30:44.061605 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:30:44.061757 kubelet[2545]: W1108 00:30:44.061621 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:30:44.061757 kubelet[2545]: E1108 00:30:44.061636 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three-record FlexVolume probe failure above recurs, timestamps aside, four more times through 00:30:44.072.]
Nov 8 00:30:44.161408 kubelet[2545]: E1108 00:30:44.161348 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954"
Nov 8 00:30:44.175508 containerd[1500]: time="2025-11-08T00:30:44.175464827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fbzgm,Uid:3a0abe83-7819-4f78-a978-506dfb427e5e,Namespace:calico-system,Attempt:0,}"
Nov 8 00:30:44.214484 containerd[1500]: time="2025-11-08T00:30:44.214104355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:30:44.214484 containerd[1500]: time="2025-11-08T00:30:44.214197343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:30:44.214484 containerd[1500]: time="2025-11-08T00:30:44.214210909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:30:44.214751 containerd[1500]: time="2025-11-08T00:30:44.214629657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:30:44.238462 systemd[1]: Started cri-containerd-7e90ac0153b51de54c6849492de08f3e433c22bd0aee9135732e01717479988a.scope - libcontainer container 7e90ac0153b51de54c6849492de08f3e433c22bd0aee9135732e01717479988a.
[The same FlexVolume probe failure recurs some twenty more times between 00:30:44.242 and 00:30:44.258.]
Nov 8 00:30:44.258462 kubelet[2545]: I1108 00:30:44.258380 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f0c2bf49-2c83-4e41-9990-a77826efb954-registration-dir\") pod \"csi-node-driver-4mxdr\" (UID: \"f0c2bf49-2c83-4e41-9990-a77826efb954\") " pod="calico-system/csi-node-driver-4mxdr"
Nov 8 00:30:44.258819 kubelet[2545]: I1108 00:30:44.258716 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0c2bf49-2c83-4e41-9990-a77826efb954-kubelet-dir\") pod \"csi-node-driver-4mxdr\" (UID: \"f0c2bf49-2c83-4e41-9990-a77826efb954\") " pod="calico-system/csi-node-driver-4mxdr"
Nov 8 00:30:44.259011 kubelet[2545]: I1108 00:30:44.258960 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f0c2bf49-2c83-4e41-9990-a77826efb954-varrun\") pod \"csi-node-driver-4mxdr\" (UID: \"f0c2bf49-2c83-4e41-9990-a77826efb954\") " pod="calico-system/csi-node-driver-4mxdr"
Nov 8 00:30:44.259343 kubelet[2545]: I1108 00:30:44.259290 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p95m6\" (UniqueName: \"kubernetes.io/projected/f0c2bf49-2c83-4e41-9990-a77826efb954-kube-api-access-p95m6\") pod \"csi-node-driver-4mxdr\" (UID: \"f0c2bf49-2c83-4e41-9990-a77826efb954\") " pod="calico-system/csi-node-driver-4mxdr"
Nov 8 00:30:44.259677 kubelet[2545]: I1108 00:30:44.259596 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f0c2bf49-2c83-4e41-9990-a77826efb954-socket-dir\") pod \"csi-node-driver-4mxdr\" (UID: \"f0c2bf49-2c83-4e41-9990-a77826efb954\") " pod="calico-system/csi-node-driver-4mxdr"
[Interleaved with the volume records above, the FlexVolume probe failure recurs roughly a dozen more times between 00:30:44.258 and 00:30:44.263.]
Nov 8 00:30:44.271939 containerd[1500]: time="2025-11-08T00:30:44.271908610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fbzgm,Uid:3a0abe83-7819-4f78-a978-506dfb427e5e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e90ac0153b51de54c6849492de08f3e433c22bd0aee9135732e01717479988a\""
[Two final repetitions of the probe failure follow at 00:30:44.361-00:30:44.363; the last record is cut off in the source.]
Error: unexpected end of JSON input" Nov 8 00:30:44.364160 kubelet[2545]: E1108 00:30:44.363823 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.364160 kubelet[2545]: W1108 00:30:44.363842 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.364160 kubelet[2545]: E1108 00:30:44.363863 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.365514 kubelet[2545]: E1108 00:30:44.365364 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.365514 kubelet[2545]: W1108 00:30:44.365384 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.365514 kubelet[2545]: E1108 00:30:44.365401 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.366728 kubelet[2545]: E1108 00:30:44.366561 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.366728 kubelet[2545]: W1108 00:30:44.366589 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.366728 kubelet[2545]: E1108 00:30:44.366613 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.368454 kubelet[2545]: E1108 00:30:44.368212 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.368454 kubelet[2545]: W1108 00:30:44.368238 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.368454 kubelet[2545]: E1108 00:30:44.368306 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.369390 kubelet[2545]: E1108 00:30:44.368924 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.369390 kubelet[2545]: W1108 00:30:44.368945 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.369390 kubelet[2545]: E1108 00:30:44.368965 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:44.371122 kubelet[2545]: E1108 00:30:44.370320 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.371122 kubelet[2545]: W1108 00:30:44.370346 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.371122 kubelet[2545]: E1108 00:30:44.370368 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.373204 kubelet[2545]: E1108 00:30:44.372929 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.373204 kubelet[2545]: W1108 00:30:44.372996 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.373204 kubelet[2545]: E1108 00:30:44.373020 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.374829 kubelet[2545]: E1108 00:30:44.374477 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.374829 kubelet[2545]: W1108 00:30:44.374536 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.374829 kubelet[2545]: E1108 00:30:44.374559 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.381352 kubelet[2545]: E1108 00:30:44.381308 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.382872 kubelet[2545]: W1108 00:30:44.382429 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.382872 kubelet[2545]: E1108 00:30:44.382478 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.384521 kubelet[2545]: E1108 00:30:44.383634 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.384521 kubelet[2545]: W1108 00:30:44.383660 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.384521 kubelet[2545]: E1108 00:30:44.384456 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:44.386869 kubelet[2545]: E1108 00:30:44.386618 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.386869 kubelet[2545]: W1108 00:30:44.386664 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.386869 kubelet[2545]: E1108 00:30:44.386691 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.391344 kubelet[2545]: E1108 00:30:44.391322 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.391499 kubelet[2545]: W1108 00:30:44.391477 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.391642 kubelet[2545]: E1108 00:30:44.391623 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.392622 kubelet[2545]: E1108 00:30:44.392564 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.392622 kubelet[2545]: W1108 00:30:44.392581 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.392879 kubelet[2545]: E1108 00:30:44.392725 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.393466 kubelet[2545]: E1108 00:30:44.393341 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.393466 kubelet[2545]: W1108 00:30:44.393359 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.393466 kubelet[2545]: E1108 00:30:44.393375 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.395317 kubelet[2545]: E1108 00:30:44.393886 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.395317 kubelet[2545]: W1108 00:30:44.393900 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.395549 kubelet[2545]: E1108 00:30:44.395406 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:44.396213 kubelet[2545]: E1108 00:30:44.396090 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.396213 kubelet[2545]: W1108 00:30:44.396147 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.396213 kubelet[2545]: E1108 00:30:44.396166 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.396956 kubelet[2545]: E1108 00:30:44.396790 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.396956 kubelet[2545]: W1108 00:30:44.396829 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.396956 kubelet[2545]: E1108 00:30:44.396844 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.397647 kubelet[2545]: E1108 00:30:44.397319 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.397647 kubelet[2545]: W1108 00:30:44.397333 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.397647 kubelet[2545]: E1108 00:30:44.397559 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.398399 kubelet[2545]: E1108 00:30:44.398041 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.398399 kubelet[2545]: W1108 00:30:44.398055 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.398399 kubelet[2545]: E1108 00:30:44.398068 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.399025 kubelet[2545]: E1108 00:30:44.398895 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.399025 kubelet[2545]: W1108 00:30:44.398910 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.399025 kubelet[2545]: E1108 00:30:44.398922 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:44.399634 kubelet[2545]: E1108 00:30:44.399465 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.399634 kubelet[2545]: W1108 00:30:44.399506 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.399634 kubelet[2545]: E1108 00:30:44.399519 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.401014 kubelet[2545]: E1108 00:30:44.400491 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.401014 kubelet[2545]: W1108 00:30:44.400506 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.401014 kubelet[2545]: E1108 00:30:44.400545 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.401014 kubelet[2545]: E1108 00:30:44.400838 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.401014 kubelet[2545]: W1108 00:30:44.400849 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.401014 kubelet[2545]: E1108 00:30:44.400860 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:44.415049 kubelet[2545]: E1108 00:30:44.415009 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:44.415335 kubelet[2545]: W1108 00:30:44.415212 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:44.415335 kubelet[2545]: E1108 00:30:44.415241 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:45.640511 kubelet[2545]: E1108 00:30:45.640077 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:30:45.961672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount821208719.mount: Deactivated successfully. 
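The repeated triplet above is kubelet's FlexVolume dynamic-plugin probe failing on the nodeagent~uds plugin directory: the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present yet, so the "init" call produces no output, and unmarshalling the empty output yields "unexpected end of JSON input". A minimal Go sketch reproduces both errors; this is illustrative only, not kubelet's driver-call code, and the driverStatus struct is a hypothetical stand-in for the JSON status a FlexVolume driver prints on stdout:

    // flexprobe.go - sketch reproducing the two errors in the triplet above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is a hypothetical stand-in for the driver's JSON reply.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message"`
    }

    func main() {
        // A command name that cannot be resolved fails exactly like the W line:
        //   exec: "uds": executable file not found in $PATH, output: ""
        out, err := exec.Command("uds", "init").Output()
        fmt.Printf("driver call failed: %v, output: %q\n", err, out)

        // Unmarshalling the empty captured output then fails exactly like
        // the E line: unexpected end of JSON input
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Printf("failed to unmarshal output: %v\n", err)
        }
    }

The pod2daemon-flexvol image pulled below suggests Calico's flexvol installer is what eventually populates that directory, which would explain why the probe keeps retrying during this window.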
Nov 8 00:30:47.201188 containerd[1500]: time="2025-11-08T00:30:47.201145585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:47.201952 containerd[1500]: time="2025-11-08T00:30:47.201830247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 8 00:30:47.202698 containerd[1500]: time="2025-11-08T00:30:47.202656026Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:47.209207 containerd[1500]: time="2025-11-08T00:30:47.208719898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:30:47.209207 containerd[1500]: time="2025-11-08T00:30:47.209124989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.160797328s"
Nov 8 00:30:47.209207 containerd[1500]: time="2025-11-08T00:30:47.209145487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 8 00:30:47.210314 containerd[1500]: time="2025-11-08T00:30:47.210293091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 00:30:47.223611 containerd[1500]: time="2025-11-08T00:30:47.223570800Z" level=info msg="CreateContainer within sandbox \"fdeedd4190487e46f1f3833ab2c8cef2ea8eb4f9c6b7220f62d82ffc1b606f5d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 8 00:30:47.234707 containerd[1500]: time="2025-11-08T00:30:47.234677062Z" level=info msg="CreateContainer within sandbox \"fdeedd4190487e46f1f3833ab2c8cef2ea8eb4f9c6b7220f62d82ffc1b606f5d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"30c6ae5d261790fd6acbee590f77c2be6d68d31b5623c505a65b10bb786b4f00\""
Nov 8 00:30:47.235181 containerd[1500]: time="2025-11-08T00:30:47.235082232Z" level=info msg="StartContainer for \"30c6ae5d261790fd6acbee590f77c2be6d68d31b5623c505a65b10bb786b4f00\""
Nov 8 00:30:47.263359 systemd[1]: Started cri-containerd-30c6ae5d261790fd6acbee590f77c2be6d68d31b5623c505a65b10bb786b4f00.scope - libcontainer container 30c6ae5d261790fd6acbee590f77c2be6d68d31b5623c505a65b10bb786b4f00.
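From the Pulled entry above, the typha image (35234482 bytes by repo digest) came down in 3.160797328s, roughly 11 MB/s. A back-of-envelope check with the two values copied from the log; the numbers are the log's, the arithmetic is only a sanity check:

    // pullrate.go - throughput implied by the Pulled entry above.
    package main

    import "fmt"

    func main() {
        const sizeBytes = 35234482      // size reported in the Pulled message
        const pullSeconds = 3.160797328 // duration reported in the Pulled message

        fmt.Printf("%.1f MB/s (%.2f MiB/s)\n",
            sizeBytes/pullSeconds/1e6,     // ~11.1 MB/s
            sizeBytes/pullSeconds/(1<<20)) // ~10.63 MiB/s
    }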
Nov 8 00:30:47.298618 containerd[1500]: time="2025-11-08T00:30:47.298576268Z" level=info msg="StartContainer for \"30c6ae5d261790fd6acbee590f77c2be6d68d31b5623c505a65b10bb786b4f00\" returns successfully"
Nov 8 00:30:47.642051 kubelet[2545]: E1108 00:30:47.641999 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954"
Nov 8 00:30:47.739311 kubelet[2545]: I1108 00:30:47.739130 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-578797c68d-gxflz" podStartSLOduration=1.575950905 podStartE2EDuration="4.739114801s" podCreationTimestamp="2025-11-08 00:30:43 +0000 UTC" firstStartedPulling="2025-11-08 00:30:44.04669614 +0000 UTC m=+18.541198030" lastFinishedPulling="2025-11-08 00:30:47.209860027 +0000 UTC m=+21.704361926" observedRunningTime="2025-11-08 00:30:47.738765487 +0000 UTC m=+22.233267386" watchObservedRunningTime="2025-11-08 00:30:47.739114801 +0000 UTC m=+22.233616700"
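The startup-latency entry above is internally consistent: podStartSLOduration equals the end-to-end startup duration minus the image-pull window, as the monotonic (m=+) offsets in the entry themselves show. A sketch with the values copied from the log:

    // sloduration.go - cross-check of the pod_startup_latency_tracker entry
    // above, using the m=+ monotonic offsets copied from the log.
    package main

    import "fmt"

    func main() {
        const (
            firstStartedPulling = 18.541198030 // m=+ seconds
            lastFinishedPulling = 21.704361926 // m=+ seconds
            podStartE2E         = 4.739114801  // podStartE2EDuration in seconds
        )

        pulling := lastFinishedPulling - firstStartedPulling
        fmt.Printf("image pulling: %.9fs\n", pulling)             // 3.163163896s
        fmt.Printf("SLO duration:  %.9fs\n", podStartE2E-pulling) // 1.575950905s, matching the log
    }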
[the kubelet FlexVolume probe error triplet repeats verbatim, timestamps aside, 33 times between Nov 8 00:30:47.775 and 00:30:47.809; repeats omitted]
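The two "Error syncing pod" entries (00:30:45.640 and 00:30:47.642) will keep recurring until a CNI network config exists on the node; calico-node writes its config only once it is running. A hypothetical one-off diagnostic, assuming containerd's default conf_dir of /etc/cni/net.d (a path not shown in this log):

    // cnicheck.go - hypothetical check for the NetworkPluginNotReady errors.
    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // containerd's default CNI conf_dir; Calico drops a *.conflist here
        // once calico-node is up. Glob error ignored: the pattern is static.
        confs, _ := filepath.Glob("/etc/cni/net.d/*.conf*")
        if len(confs) == 0 {
            fmt.Println("no CNI config yet: kubelet keeps logging 'cni plugin not initialized'")
            return
        }
        fmt.Println("CNI configs present:", confs)
    }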
Nov 8 00:30:48.731480 kubelet[2545]: I1108 00:30:48.731412 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
[the kubelet FlexVolume probe error triplet repeats verbatim, timestamps aside, another 24 times between Nov 8 00:30:48.785 and 00:30:48.814; repeats omitted, last occurrence follows]
Nov 8 00:30:48.814162 kubelet[2545]: E1108 00:30:48.814143 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:30:48.814162 kubelet[2545]: W1108 00:30:48.814155 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:30:48.814162 kubelet[2545]: E1108 00:30:48.814163 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.814408 kubelet[2545]: E1108 00:30:48.814357 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.814408 kubelet[2545]: W1108 00:30:48.814366 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.814408 kubelet[2545]: E1108 00:30:48.814373 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.814548 kubelet[2545]: E1108 00:30:48.814525 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.814548 kubelet[2545]: W1108 00:30:48.814540 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.814548 kubelet[2545]: E1108 00:30:48.814547 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.814731 kubelet[2545]: E1108 00:30:48.814711 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.814731 kubelet[2545]: W1108 00:30:48.814725 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.814792 kubelet[2545]: E1108 00:30:48.814738 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.815030 kubelet[2545]: E1108 00:30:48.814929 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.815030 kubelet[2545]: W1108 00:30:48.814938 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.815030 kubelet[2545]: E1108 00:30:48.814946 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.815221 kubelet[2545]: E1108 00:30:48.815142 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.815221 kubelet[2545]: W1108 00:30:48.815151 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.815221 kubelet[2545]: E1108 00:30:48.815158 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:48.815614 kubelet[2545]: E1108 00:30:48.815465 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.815614 kubelet[2545]: W1108 00:30:48.815475 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.815614 kubelet[2545]: E1108 00:30:48.815482 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.815928 kubelet[2545]: E1108 00:30:48.815842 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.815928 kubelet[2545]: W1108 00:30:48.815850 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.815928 kubelet[2545]: E1108 00:30:48.815858 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:30:48.816140 kubelet[2545]: E1108 00:30:48.816096 2545 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:30:48.816140 kubelet[2545]: W1108 00:30:48.816107 2545 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:30:48.816140 kubelet[2545]: E1108 00:30:48.816115 2545 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:30:49.116959 containerd[1500]: time="2025-11-08T00:30:49.116881113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:49.119790 containerd[1500]: time="2025-11-08T00:30:49.119515977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:30:49.124830 containerd[1500]: time="2025-11-08T00:30:49.124568922Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:49.128595 containerd[1500]: time="2025-11-08T00:30:49.128544162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:49.129575 containerd[1500]: time="2025-11-08T00:30:49.129433721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.919111726s" Nov 8 00:30:49.129575 containerd[1500]: time="2025-11-08T00:30:49.129474939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:30:49.151383 containerd[1500]: time="2025-11-08T00:30:49.151347347Z" level=info msg="CreateContainer within sandbox \"7e90ac0153b51de54c6849492de08f3e433c22bd0aee9135732e01717479988a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:30:49.192314 containerd[1500]: time="2025-11-08T00:30:49.192133212Z" level=info msg="CreateContainer within sandbox \"7e90ac0153b51de54c6849492de08f3e433c22bd0aee9135732e01717479988a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ca3ac656cc6ae31d28b3f9d7fb1232081f5640c519ca840cec1aa17ff53a4679\"" Nov 8 00:30:49.193970 containerd[1500]: time="2025-11-08T00:30:49.193278448Z" level=info msg="StartContainer for \"ca3ac656cc6ae31d28b3f9d7fb1232081f5640c519ca840cec1aa17ff53a4679\"" Nov 8 00:30:49.269784 systemd[1]: Started cri-containerd-ca3ac656cc6ae31d28b3f9d7fb1232081f5640c519ca840cec1aa17ff53a4679.scope - libcontainer container ca3ac656cc6ae31d28b3f9d7fb1232081f5640c519ca840cec1aa17ff53a4679. Nov 8 00:30:49.297781 containerd[1500]: time="2025-11-08T00:30:49.297706585Z" level=info msg="StartContainer for \"ca3ac656cc6ae31d28b3f9d7fb1232081f5640c519ca840cec1aa17ff53a4679\" returns successfully" Nov 8 00:30:49.317562 systemd[1]: cri-containerd-ca3ac656cc6ae31d28b3f9d7fb1232081f5640c519ca840cec1aa17ff53a4679.scope: Deactivated successfully. Nov 8 00:30:49.374447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca3ac656cc6ae31d28b3f9d7fb1232081f5640c519ca840cec1aa17ff53a4679-rootfs.mount: Deactivated successfully. 
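[editor's note] The FlexVolume errors above all share one root cause: the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, so the driver call produces no output at all, and kubelet's JSON decode of that empty output is what reports "unexpected end of JSON input". A minimal Go sketch of that failure mode follows; the driverStatus field set is an illustrative assumption, not kubelet's exact type.

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus loosely mirrors the JSON a FlexVolume driver is expected to
// print on stdout in response to `init` (assumed shape, for illustration).
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// A missing driver binary yields no stdout, so the decoder sees an
	// empty byte slice — exactly the condition logged above.
	var st driverStatus
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // prints: unexpected end of JSON input
}

The noise stops on its own once Calico's flexvol-driver init container (pulled and started just above) installs the binary into that plugin directory.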
Nov 8 00:30:49.427024 containerd[1500]: time="2025-11-08T00:30:49.390652456Z" level=info msg="shim disconnected" id=ca3ac656cc6ae31d28b3f9d7fb1232081f5640c519ca840cec1aa17ff53a4679 namespace=k8s.io Nov 8 00:30:49.427024 containerd[1500]: time="2025-11-08T00:30:49.427020231Z" level=warning msg="cleaning up after shim disconnected" id=ca3ac656cc6ae31d28b3f9d7fb1232081f5640c519ca840cec1aa17ff53a4679 namespace=k8s.io Nov 8 00:30:49.427209 containerd[1500]: time="2025-11-08T00:30:49.427038797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:49.642547 kubelet[2545]: E1108 00:30:49.640500 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:30:49.739398 containerd[1500]: time="2025-11-08T00:30:49.739274987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:30:51.639828 kubelet[2545]: E1108 00:30:51.639742 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:30:53.639570 kubelet[2545]: E1108 00:30:53.639534 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:30:53.989942 containerd[1500]: time="2025-11-08T00:30:53.989826524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:53.990854 containerd[1500]: time="2025-11-08T00:30:53.990819176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:30:53.991615 containerd[1500]: time="2025-11-08T00:30:53.991574586Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:53.993124 containerd[1500]: time="2025-11-08T00:30:53.993092182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:30:53.993804 containerd[1500]: time="2025-11-08T00:30:53.993593131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.254199068s" Nov 8 00:30:53.993804 containerd[1500]: time="2025-11-08T00:30:53.993619141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:30:53.997332 containerd[1500]: time="2025-11-08T00:30:53.997232376Z" level=info 
msg="CreateContainer within sandbox \"7e90ac0153b51de54c6849492de08f3e433c22bd0aee9135732e01717479988a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:30:54.021624 containerd[1500]: time="2025-11-08T00:30:54.021565894Z" level=info msg="CreateContainer within sandbox \"7e90ac0153b51de54c6849492de08f3e433c22bd0aee9135732e01717479988a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b26b129b079a2485e0161d4179f33c90a8e7fd284e8c396a66b8d8c9ff5e1dd1\"" Nov 8 00:30:54.022332 containerd[1500]: time="2025-11-08T00:30:54.022307749Z" level=info msg="StartContainer for \"b26b129b079a2485e0161d4179f33c90a8e7fd284e8c396a66b8d8c9ff5e1dd1\"" Nov 8 00:30:54.053546 systemd[1]: Started cri-containerd-b26b129b079a2485e0161d4179f33c90a8e7fd284e8c396a66b8d8c9ff5e1dd1.scope - libcontainer container b26b129b079a2485e0161d4179f33c90a8e7fd284e8c396a66b8d8c9ff5e1dd1. Nov 8 00:30:54.081292 containerd[1500]: time="2025-11-08T00:30:54.081241098Z" level=info msg="StartContainer for \"b26b129b079a2485e0161d4179f33c90a8e7fd284e8c396a66b8d8c9ff5e1dd1\" returns successfully" Nov 8 00:30:54.456264 systemd[1]: cri-containerd-b26b129b079a2485e0161d4179f33c90a8e7fd284e8c396a66b8d8c9ff5e1dd1.scope: Deactivated successfully. Nov 8 00:30:54.479259 kubelet[2545]: I1108 00:30:54.479218 2545 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 8 00:30:54.487571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b26b129b079a2485e0161d4179f33c90a8e7fd284e8c396a66b8d8c9ff5e1dd1-rootfs.mount: Deactivated successfully. Nov 8 00:30:54.515577 containerd[1500]: time="2025-11-08T00:30:54.515380785Z" level=info msg="shim disconnected" id=b26b129b079a2485e0161d4179f33c90a8e7fd284e8c396a66b8d8c9ff5e1dd1 namespace=k8s.io Nov 8 00:30:54.515776 containerd[1500]: time="2025-11-08T00:30:54.515687927Z" level=warning msg="cleaning up after shim disconnected" id=b26b129b079a2485e0161d4179f33c90a8e7fd284e8c396a66b8d8c9ff5e1dd1 namespace=k8s.io Nov 8 00:30:54.515776 containerd[1500]: time="2025-11-08T00:30:54.515703957Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:30:54.536915 containerd[1500]: time="2025-11-08T00:30:54.536866055Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:30:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:30:54.579156 systemd[1]: Created slice kubepods-burstable-pod69a2d468_d7b2_4842_a119_55b88cf0a542.slice - libcontainer container kubepods-burstable-pod69a2d468_d7b2_4842_a119_55b88cf0a542.slice. Nov 8 00:30:54.585128 systemd[1]: Created slice kubepods-burstable-podbadc83cd_1ec1_4101_8058_782726fa564f.slice - libcontainer container kubepods-burstable-podbadc83cd_1ec1_4101_8058_782726fa564f.slice. Nov 8 00:30:54.595488 systemd[1]: Created slice kubepods-besteffort-pod23fa7156_ab47_44e8_be85_07831bed27aa.slice - libcontainer container kubepods-besteffort-pod23fa7156_ab47_44e8_be85_07831bed27aa.slice. Nov 8 00:30:54.602456 systemd[1]: Created slice kubepods-besteffort-podfa74907f_f0d6_4fee_8b02_a0f5214b5103.slice - libcontainer container kubepods-besteffort-podfa74907f_f0d6_4fee_8b02_a0f5214b5103.slice. Nov 8 00:30:54.609625 systemd[1]: Created slice kubepods-besteffort-podfa6c771d_e186_4cd9_a6e0_552ae2873655.slice - libcontainer container kubepods-besteffort-podfa6c771d_e186_4cd9_a6e0_552ae2873655.slice. 
Nov 8 00:30:54.617053 systemd[1]: Created slice kubepods-besteffort-pode0024d9c_a1f5_4e59_abcc_d8ad3577f9a2.slice - libcontainer container kubepods-besteffort-pode0024d9c_a1f5_4e59_abcc_d8ad3577f9a2.slice. Nov 8 00:30:54.625102 systemd[1]: Created slice kubepods-besteffort-pod353c4d02_7f56_4df1_98e1_7b89eab13038.slice - libcontainer container kubepods-besteffort-pod353c4d02_7f56_4df1_98e1_7b89eab13038.slice. Nov 8 00:30:54.651117 kubelet[2545]: I1108 00:30:54.651061 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/23fa7156-ab47-44e8-be85-07831bed27aa-tigera-ca-bundle\") pod \"calico-kube-controllers-b7584974-6v6qw\" (UID: \"23fa7156-ab47-44e8-be85-07831bed27aa\") " pod="calico-system/calico-kube-controllers-b7584974-6v6qw" Nov 8 00:30:54.651117 kubelet[2545]: I1108 00:30:54.651099 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/badc83cd-1ec1-4101-8058-782726fa564f-config-volume\") pod \"coredns-66bc5c9577-268vt\" (UID: \"badc83cd-1ec1-4101-8058-782726fa564f\") " pod="kube-system/coredns-66bc5c9577-268vt" Nov 8 00:30:54.651117 kubelet[2545]: I1108 00:30:54.651115 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/353c4d02-7f56-4df1-98e1-7b89eab13038-calico-apiserver-certs\") pod \"calico-apiserver-69548547f7-r5lxh\" (UID: \"353c4d02-7f56-4df1-98e1-7b89eab13038\") " pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" Nov 8 00:30:54.651117 kubelet[2545]: I1108 00:30:54.651128 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t488\" (UniqueName: \"kubernetes.io/projected/353c4d02-7f56-4df1-98e1-7b89eab13038-kube-api-access-6t488\") pod \"calico-apiserver-69548547f7-r5lxh\" (UID: \"353c4d02-7f56-4df1-98e1-7b89eab13038\") " pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" Nov 8 00:30:54.651117 kubelet[2545]: I1108 00:30:54.651142 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pcxg\" (UniqueName: \"kubernetes.io/projected/69a2d468-d7b2-4842-a119-55b88cf0a542-kube-api-access-2pcxg\") pod \"coredns-66bc5c9577-8bzvr\" (UID: \"69a2d468-d7b2-4842-a119-55b88cf0a542\") " pod="kube-system/coredns-66bc5c9577-8bzvr" Nov 8 00:30:54.652640 kubelet[2545]: I1108 00:30:54.651157 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa6c771d-e186-4cd9-a6e0-552ae2873655-config\") pod \"goldmane-7c778bb748-xpcnb\" (UID: \"fa6c771d-e186-4cd9-a6e0-552ae2873655\") " pod="calico-system/goldmane-7c778bb748-xpcnb" Nov 8 00:30:54.652640 kubelet[2545]: I1108 00:30:54.651168 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa74907f-f0d6-4fee-8b02-a0f5214b5103-whisker-ca-bundle\") pod \"whisker-65554d4d5b-g4wqz\" (UID: \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\") " pod="calico-system/whisker-65554d4d5b-g4wqz" Nov 8 00:30:54.652640 kubelet[2545]: I1108 00:30:54.651181 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2-calico-apiserver-certs\") pod \"calico-apiserver-69548547f7-zwltf\" (UID: \"e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2\") " pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" Nov 8 00:30:54.652640 kubelet[2545]: I1108 00:30:54.651194 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fa6c771d-e186-4cd9-a6e0-552ae2873655-goldmane-key-pair\") pod \"goldmane-7c778bb748-xpcnb\" (UID: \"fa6c771d-e186-4cd9-a6e0-552ae2873655\") " pod="calico-system/goldmane-7c778bb748-xpcnb" Nov 8 00:30:54.652640 kubelet[2545]: I1108 00:30:54.651206 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa74907f-f0d6-4fee-8b02-a0f5214b5103-whisker-backend-key-pair\") pod \"whisker-65554d4d5b-g4wqz\" (UID: \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\") " pod="calico-system/whisker-65554d4d5b-g4wqz" Nov 8 00:30:54.652804 kubelet[2545]: I1108 00:30:54.651221 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqdb\" (UniqueName: \"kubernetes.io/projected/e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2-kube-api-access-bxqdb\") pod \"calico-apiserver-69548547f7-zwltf\" (UID: \"e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2\") " pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" Nov 8 00:30:54.652804 kubelet[2545]: I1108 00:30:54.651236 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt52x\" (UniqueName: \"kubernetes.io/projected/23fa7156-ab47-44e8-be85-07831bed27aa-kube-api-access-bt52x\") pod \"calico-kube-controllers-b7584974-6v6qw\" (UID: \"23fa7156-ab47-44e8-be85-07831bed27aa\") " pod="calico-system/calico-kube-controllers-b7584974-6v6qw" Nov 8 00:30:54.652804 kubelet[2545]: I1108 00:30:54.651266 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2g2\" (UniqueName: \"kubernetes.io/projected/badc83cd-1ec1-4101-8058-782726fa564f-kube-api-access-vh2g2\") pod \"coredns-66bc5c9577-268vt\" (UID: \"badc83cd-1ec1-4101-8058-782726fa564f\") " pod="kube-system/coredns-66bc5c9577-268vt" Nov 8 00:30:54.652804 kubelet[2545]: I1108 00:30:54.651283 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69a2d468-d7b2-4842-a119-55b88cf0a542-config-volume\") pod \"coredns-66bc5c9577-8bzvr\" (UID: \"69a2d468-d7b2-4842-a119-55b88cf0a542\") " pod="kube-system/coredns-66bc5c9577-8bzvr" Nov 8 00:30:54.652804 kubelet[2545]: I1108 00:30:54.651300 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa6c771d-e186-4cd9-a6e0-552ae2873655-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-xpcnb\" (UID: \"fa6c771d-e186-4cd9-a6e0-552ae2873655\") " pod="calico-system/goldmane-7c778bb748-xpcnb" Nov 8 00:30:54.652976 kubelet[2545]: I1108 00:30:54.651318 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq7l5\" (UniqueName: \"kubernetes.io/projected/fa6c771d-e186-4cd9-a6e0-552ae2873655-kube-api-access-qq7l5\") pod \"goldmane-7c778bb748-xpcnb\" (UID: \"fa6c771d-e186-4cd9-a6e0-552ae2873655\") " 
pod="calico-system/goldmane-7c778bb748-xpcnb" Nov 8 00:30:54.652976 kubelet[2545]: I1108 00:30:54.651334 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8v7\" (UniqueName: \"kubernetes.io/projected/fa74907f-f0d6-4fee-8b02-a0f5214b5103-kube-api-access-rb8v7\") pod \"whisker-65554d4d5b-g4wqz\" (UID: \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\") " pod="calico-system/whisker-65554d4d5b-g4wqz" Nov 8 00:30:54.749846 containerd[1500]: time="2025-11-08T00:30:54.749653828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:30:54.885081 containerd[1500]: time="2025-11-08T00:30:54.885021733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8bzvr,Uid:69a2d468-d7b2-4842-a119-55b88cf0a542,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:54.892830 containerd[1500]: time="2025-11-08T00:30:54.892795323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-268vt,Uid:badc83cd-1ec1-4101-8058-782726fa564f,Namespace:kube-system,Attempt:0,}" Nov 8 00:30:54.902319 containerd[1500]: time="2025-11-08T00:30:54.902245031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7584974-6v6qw,Uid:23fa7156-ab47-44e8-be85-07831bed27aa,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:54.916481 containerd[1500]: time="2025-11-08T00:30:54.916230686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xpcnb,Uid:fa6c771d-e186-4cd9-a6e0-552ae2873655,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:54.917157 containerd[1500]: time="2025-11-08T00:30:54.916811646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65554d4d5b-g4wqz,Uid:fa74907f-f0d6-4fee-8b02-a0f5214b5103,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:54.925479 containerd[1500]: time="2025-11-08T00:30:54.925453073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69548547f7-zwltf,Uid:e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:30:54.954276 containerd[1500]: time="2025-11-08T00:30:54.954129741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69548547f7-r5lxh,Uid:353c4d02-7f56-4df1-98e1-7b89eab13038,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:30:55.203661 containerd[1500]: time="2025-11-08T00:30:55.203606012Z" level=error msg="Failed to destroy network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.213634 containerd[1500]: time="2025-11-08T00:30:55.213416012Z" level=error msg="encountered an error cleaning up failed sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.213896 containerd[1500]: time="2025-11-08T00:30:55.213755155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xpcnb,Uid:fa6c771d-e186-4cd9-a6e0-552ae2873655,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.219819 kubelet[2545]: E1108 00:30:55.218005 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.219819 kubelet[2545]: E1108 00:30:55.218095 2545 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xpcnb" Nov 8 00:30:55.219819 kubelet[2545]: E1108 00:30:55.218119 2545 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-xpcnb" Nov 8 00:30:55.219930 kubelet[2545]: E1108 00:30:55.218176 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-xpcnb_calico-system(fa6c771d-e186-4cd9-a6e0-552ae2873655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-xpcnb_calico-system(fa6c771d-e186-4cd9-a6e0-552ae2873655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:30:55.231415 containerd[1500]: time="2025-11-08T00:30:55.231379466Z" level=error msg="Failed to destroy network for sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.232039 containerd[1500]: time="2025-11-08T00:30:55.231990383Z" level=error msg="encountered an error cleaning up failed sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.232088 containerd[1500]: time="2025-11-08T00:30:55.232062891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69548547f7-r5lxh,Uid:353c4d02-7f56-4df1-98e1-7b89eab13038,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.232264 kubelet[2545]: E1108 00:30:55.232231 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.232387 kubelet[2545]: E1108 00:30:55.232368 2545 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" Nov 8 00:30:55.232483 kubelet[2545]: E1108 00:30:55.232463 2545 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" Nov 8 00:30:55.232595 kubelet[2545]: E1108 00:30:55.232576 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69548547f7-r5lxh_calico-apiserver(353c4d02-7f56-4df1-98e1-7b89eab13038)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69548547f7-r5lxh_calico-apiserver(353c4d02-7f56-4df1-98e1-7b89eab13038)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:30:55.234875 containerd[1500]: time="2025-11-08T00:30:55.234846761Z" level=error msg="Failed to destroy network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.235836 containerd[1500]: time="2025-11-08T00:30:55.235604184Z" level=error msg="Failed to destroy network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.236115 containerd[1500]: time="2025-11-08T00:30:55.236079804Z" level=error msg="encountered an error cleaning up failed sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.236806 containerd[1500]: time="2025-11-08T00:30:55.236778938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8bzvr,Uid:69a2d468-d7b2-4842-a119-55b88cf0a542,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.237431 containerd[1500]: time="2025-11-08T00:30:55.236119499Z" level=error msg="encountered an error cleaning up failed sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.237497 containerd[1500]: time="2025-11-08T00:30:55.237466099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65554d4d5b-g4wqz,Uid:fa74907f-f0d6-4fee-8b02-a0f5214b5103,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.237541 containerd[1500]: time="2025-11-08T00:30:55.237082312Z" level=error msg="Failed to destroy network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.238122 containerd[1500]: time="2025-11-08T00:30:55.238090630Z" level=error msg="encountered an error cleaning up failed sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.238174 containerd[1500]: time="2025-11-08T00:30:55.238127791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-268vt,Uid:badc83cd-1ec1-4101-8058-782726fa564f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.238340 containerd[1500]: time="2025-11-08T00:30:55.238314313Z" level=error msg="Failed to destroy network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.238728 kubelet[2545]: E1108 
00:30:55.238413 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.238728 kubelet[2545]: E1108 00:30:55.238452 2545 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65554d4d5b-g4wqz" Nov 8 00:30:55.238728 kubelet[2545]: E1108 00:30:55.238464 2545 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65554d4d5b-g4wqz" Nov 8 00:30:55.238813 containerd[1500]: time="2025-11-08T00:30:55.238504685Z" level=error msg="encountered an error cleaning up failed sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.238813 containerd[1500]: time="2025-11-08T00:30:55.238533729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7584974-6v6qw,Uid:23fa7156-ab47-44e8-be85-07831bed27aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.239124 kubelet[2545]: E1108 00:30:55.238498 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65554d4d5b-g4wqz_calico-system(fa74907f-f0d6-4fee-8b02-a0f5214b5103)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65554d4d5b-g4wqz_calico-system(fa74907f-f0d6-4fee-8b02-a0f5214b5103)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65554d4d5b-g4wqz" podUID="fa74907f-f0d6-4fee-8b02-a0f5214b5103" Nov 8 00:30:55.239124 kubelet[2545]: E1108 00:30:55.238647 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.239124 kubelet[2545]: E1108 00:30:55.238666 2545 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" Nov 8 00:30:55.239205 kubelet[2545]: E1108 00:30:55.238677 2545 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" Nov 8 00:30:55.239205 kubelet[2545]: E1108 00:30:55.238701 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b7584974-6v6qw_calico-system(23fa7156-ab47-44e8-be85-07831bed27aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b7584974-6v6qw_calico-system(23fa7156-ab47-44e8-be85-07831bed27aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:30:55.239205 kubelet[2545]: E1108 00:30:55.238325 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.239584 kubelet[2545]: E1108 00:30:55.239304 2545 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-268vt" Nov 8 00:30:55.239584 kubelet[2545]: E1108 00:30:55.239320 2545 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-268vt" Nov 8 00:30:55.239584 kubelet[2545]: E1108 00:30:55.239380 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-66bc5c9577-268vt_kube-system(badc83cd-1ec1-4101-8058-782726fa564f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-268vt_kube-system(badc83cd-1ec1-4101-8058-782726fa564f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-268vt" podUID="badc83cd-1ec1-4101-8058-782726fa564f" Nov 8 00:30:55.239905 kubelet[2545]: E1108 00:30:55.239866 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.239905 kubelet[2545]: E1108 00:30:55.239898 2545 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8bzvr" Nov 8 00:30:55.239905 kubelet[2545]: E1108 00:30:55.239910 2545 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-8bzvr" Nov 8 00:30:55.240101 kubelet[2545]: E1108 00:30:55.239937 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-8bzvr_kube-system(69a2d468-d7b2-4842-a119-55b88cf0a542)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-8bzvr_kube-system(69a2d468-d7b2-4842-a119-55b88cf0a542)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8bzvr" podUID="69a2d468-d7b2-4842-a119-55b88cf0a542" Nov 8 00:30:55.252870 containerd[1500]: time="2025-11-08T00:30:55.252537428Z" level=error msg="Failed to destroy network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.252870 containerd[1500]: time="2025-11-08T00:30:55.252760560Z" level=error msg="encountered an error cleaning up failed sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.252870 containerd[1500]: time="2025-11-08T00:30:55.252794505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69548547f7-zwltf,Uid:e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.254413 kubelet[2545]: E1108 00:30:55.253133 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.254413 kubelet[2545]: E1108 00:30:55.253188 2545 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" Nov 8 00:30:55.254413 kubelet[2545]: E1108 00:30:55.253257 2545 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" Nov 8 00:30:55.254527 kubelet[2545]: E1108 00:30:55.253549 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69548547f7-zwltf_calico-apiserver(e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69548547f7-zwltf_calico-apiserver(e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:30:55.645440 systemd[1]: Created slice kubepods-besteffort-podf0c2bf49_2c83_4e41_9990_a77826efb954.slice - libcontainer container kubepods-besteffort-podf0c2bf49_2c83_4e41_9990_a77826efb954.slice. 
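[editor's note] Every sandbox failure in this stretch carries the same diagnosis: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file which, per the error text itself, appears only once the calico/node container is running and has mounted /var/lib/calico/ — the install-cni and flexvol init containers alone do not create it. A minimal Go sketch of the readiness check the error text describes (illustrative, not the plugin's actual code):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The failing path quoted in every RunPodSandbox error above.
	const nodenameFile = "/var/lib/calico/nodename"

	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("CNI not ready: %v\n", err) // e.g. "no such file or directory"
		return
	}
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Printf("read failed: %v\n", err)
		return
	}
	fmt.Printf("calico node name: %s\n", name)
}

The kubelet keeps retrying each pod, so these errors are expected to clear once calico/node (pulled next as ghcr.io/flatcar/calico/node:v3.30.4) comes up and writes the file.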
Nov 8 00:30:55.649504 containerd[1500]: time="2025-11-08T00:30:55.649431997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mxdr,Uid:f0c2bf49-2c83-4e41-9990-a77826efb954,Namespace:calico-system,Attempt:0,}" Nov 8 00:30:55.702803 containerd[1500]: time="2025-11-08T00:30:55.702757711Z" level=error msg="Failed to destroy network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.703072 containerd[1500]: time="2025-11-08T00:30:55.703033463Z" level=error msg="encountered an error cleaning up failed sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.703119 containerd[1500]: time="2025-11-08T00:30:55.703079240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mxdr,Uid:f0c2bf49-2c83-4e41-9990-a77826efb954,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.703565 kubelet[2545]: E1108 00:30:55.703230 2545 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.703565 kubelet[2545]: E1108 00:30:55.703303 2545 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4mxdr" Nov 8 00:30:55.703565 kubelet[2545]: E1108 00:30:55.703322 2545 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4mxdr" Nov 8 00:30:55.703872 kubelet[2545]: E1108 00:30:55.703369 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:30:55.756288 kubelet[2545]: I1108 00:30:55.756178 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:30:55.786820 kubelet[2545]: I1108 00:30:55.786798 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:30:55.790289 kubelet[2545]: I1108 00:30:55.790010 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:30:55.793990 kubelet[2545]: I1108 00:30:55.793974 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:30:55.801714 kubelet[2545]: I1108 00:30:55.801697 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:30:55.807260 containerd[1500]: time="2025-11-08T00:30:55.807206028Z" level=info msg="StopPodSandbox for \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\"" Nov 8 00:30:55.808440 containerd[1500]: time="2025-11-08T00:30:55.807714982Z" level=info msg="StopPodSandbox for \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\"" Nov 8 00:30:55.808507 containerd[1500]: time="2025-11-08T00:30:55.808465752Z" level=info msg="Ensure that sandbox a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc in task-service has been cleanup successfully" Nov 8 00:30:55.808565 containerd[1500]: time="2025-11-08T00:30:55.808545663Z" level=info msg="Ensure that sandbox 1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c in task-service has been cleanup successfully" Nov 8 00:30:55.809855 containerd[1500]: time="2025-11-08T00:30:55.809828232Z" level=info msg="StopPodSandbox for \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\"" Nov 8 00:30:55.809971 containerd[1500]: time="2025-11-08T00:30:55.809942869Z" level=info msg="Ensure that sandbox 45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51 in task-service has been cleanup successfully" Nov 8 00:30:55.810865 kubelet[2545]: I1108 00:30:55.810848 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:30:55.812606 containerd[1500]: time="2025-11-08T00:30:55.812539864Z" level=info msg="StopPodSandbox for \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\"" Nov 8 00:30:55.812820 containerd[1500]: time="2025-11-08T00:30:55.812789608Z" level=info msg="Ensure that sandbox f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c in task-service has been cleanup successfully" Nov 8 00:30:55.816607 containerd[1500]: time="2025-11-08T00:30:55.816348314Z" level=info msg="StopPodSandbox for \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\"" Nov 8 00:30:55.816607 containerd[1500]: time="2025-11-08T00:30:55.816475936Z" level=info msg="Ensure that sandbox ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b 
in task-service has been cleanup successfully" Nov 8 00:30:55.817600 containerd[1500]: time="2025-11-08T00:30:55.809945173Z" level=info msg="StopPodSandbox for \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\"" Nov 8 00:30:55.817784 containerd[1500]: time="2025-11-08T00:30:55.817765958Z" level=info msg="Ensure that sandbox 03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273 in task-service has been cleanup successfully" Nov 8 00:30:55.824598 kubelet[2545]: I1108 00:30:55.824580 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:30:55.827850 containerd[1500]: time="2025-11-08T00:30:55.827822213Z" level=info msg="StopPodSandbox for \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\"" Nov 8 00:30:55.828467 containerd[1500]: time="2025-11-08T00:30:55.828438781Z" level=info msg="Ensure that sandbox 2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6 in task-service has been cleanup successfully" Nov 8 00:30:55.835225 kubelet[2545]: I1108 00:30:55.835024 2545 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:30:55.836760 containerd[1500]: time="2025-11-08T00:30:55.836331632Z" level=info msg="StopPodSandbox for \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\"" Nov 8 00:30:55.838912 containerd[1500]: time="2025-11-08T00:30:55.838889013Z" level=info msg="Ensure that sandbox 592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3 in task-service has been cleanup successfully" Nov 8 00:30:55.893882 containerd[1500]: time="2025-11-08T00:30:55.893440788Z" level=error msg="StopPodSandbox for \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\" failed" error="failed to destroy network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.893987 kubelet[2545]: E1108 00:30:55.893623 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:30:55.893987 kubelet[2545]: E1108 00:30:55.893667 2545 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c"} Nov 8 00:30:55.893987 kubelet[2545]: E1108 00:30:55.893709 2545 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23fa7156-ab47-44e8-be85-07831bed27aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:55.893987 kubelet[2545]: E1108 
00:30:55.893731 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23fa7156-ab47-44e8-be85-07831bed27aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:30:55.894780 containerd[1500]: time="2025-11-08T00:30:55.894671528Z" level=error msg="StopPodSandbox for \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\" failed" error="failed to destroy network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.894824 kubelet[2545]: E1108 00:30:55.894781 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:30:55.894824 kubelet[2545]: E1108 00:30:55.894800 2545 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c"} Nov 8 00:30:55.894864 kubelet[2545]: E1108 00:30:55.894837 2545 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69a2d468-d7b2-4842-a119-55b88cf0a542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:55.894864 kubelet[2545]: E1108 00:30:55.894857 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69a2d468-d7b2-4842-a119-55b88cf0a542\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-8bzvr" podUID="69a2d468-d7b2-4842-a119-55b88cf0a542" Nov 8 00:30:55.897082 containerd[1500]: time="2025-11-08T00:30:55.896704776Z" level=error msg="StopPodSandbox for \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\" failed" error="failed to destroy network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 
00:30:55.897332 containerd[1500]: time="2025-11-08T00:30:55.897218970Z" level=error msg="StopPodSandbox for \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\" failed" error="failed to destroy network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.898337 kubelet[2545]: E1108 00:30:55.898138 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:30:55.898337 kubelet[2545]: E1108 00:30:55.898164 2545 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273"} Nov 8 00:30:55.898337 kubelet[2545]: E1108 00:30:55.898191 2545 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:55.898337 kubelet[2545]: E1108 00:30:55.898211 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:30:55.898567 kubelet[2545]: E1108 00:30:55.898142 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:30:55.898567 kubelet[2545]: E1108 00:30:55.898228 2545 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51"} Nov 8 00:30:55.899156 kubelet[2545]: E1108 00:30:55.898240 2545 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa6c771d-e186-4cd9-a6e0-552ae2873655\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:55.899156 kubelet[2545]: E1108 00:30:55.898660 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa6c771d-e186-4cd9-a6e0-552ae2873655\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:30:55.899336 containerd[1500]: time="2025-11-08T00:30:55.899196603Z" level=error msg="StopPodSandbox for \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\" failed" error="failed to destroy network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.899672 kubelet[2545]: E1108 00:30:55.899562 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:30:55.899672 kubelet[2545]: E1108 00:30:55.899586 2545 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc"} Nov 8 00:30:55.899672 kubelet[2545]: E1108 00:30:55.899603 2545 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:55.899672 kubelet[2545]: E1108 00:30:55.899631 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65554d4d5b-g4wqz" podUID="fa74907f-f0d6-4fee-8b02-a0f5214b5103" Nov 8 00:30:55.904214 containerd[1500]: time="2025-11-08T00:30:55.904193041Z" level=error msg="StopPodSandbox for \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\" failed" error="failed 
to destroy network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.905403 kubelet[2545]: E1108 00:30:55.905359 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:30:55.905403 kubelet[2545]: E1108 00:30:55.905401 2545 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b"} Nov 8 00:30:55.906232 kubelet[2545]: E1108 00:30:55.905428 2545 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0c2bf49-2c83-4e41-9990-a77826efb954\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:55.906232 kubelet[2545]: E1108 00:30:55.905469 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0c2bf49-2c83-4e41-9990-a77826efb954\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:30:55.906561 containerd[1500]: time="2025-11-08T00:30:55.906521189Z" level=error msg="StopPodSandbox for \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\" failed" error="failed to destroy network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.906754 kubelet[2545]: E1108 00:30:55.906657 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:30:55.906836 kubelet[2545]: E1108 00:30:55.906793 2545 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6"} Nov 8 00:30:55.906836 kubelet[2545]: E1108 00:30:55.906821 2545 
kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"badc83cd-1ec1-4101-8058-782726fa564f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:55.906980 kubelet[2545]: E1108 00:30:55.906846 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"badc83cd-1ec1-4101-8058-782726fa564f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-268vt" podUID="badc83cd-1ec1-4101-8058-782726fa564f" Nov 8 00:30:55.911201 containerd[1500]: time="2025-11-08T00:30:55.911174126Z" level=error msg="StopPodSandbox for \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\" failed" error="failed to destroy network for sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:30:55.911337 kubelet[2545]: E1108 00:30:55.911293 2545 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:30:55.911337 kubelet[2545]: E1108 00:30:55.911319 2545 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3"} Nov 8 00:30:55.911395 kubelet[2545]: E1108 00:30:55.911338 2545 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"353c4d02-7f56-4df1-98e1-7b89eab13038\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:30:55.911395 kubelet[2545]: E1108 00:30:55.911353 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"353c4d02-7f56-4df1-98e1-7b89eab13038\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" 
podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:30:56.007948 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273-shm.mount: Deactivated successfully. Nov 8 00:30:56.008082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3-shm.mount: Deactivated successfully. Nov 8 00:30:56.008171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51-shm.mount: Deactivated successfully. Nov 8 00:30:56.008294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c-shm.mount: Deactivated successfully. Nov 8 00:30:56.008401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc-shm.mount: Deactivated successfully. Nov 8 00:30:56.008508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6-shm.mount: Deactivated successfully. Nov 8 00:30:56.008588 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c-shm.mount: Deactivated successfully. Nov 8 00:31:02.177882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695807009.mount: Deactivated successfully. Nov 8 00:31:02.274283 containerd[1500]: time="2025-11-08T00:31:02.264725580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:31:02.284669 containerd[1500]: time="2025-11-08T00:31:02.283425921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:02.299929 containerd[1500]: time="2025-11-08T00:31:02.299891222Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:02.301816 containerd[1500]: time="2025-11-08T00:31:02.301135332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:31:02.303345 containerd[1500]: time="2025-11-08T00:31:02.303324736Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.549594394s" Nov 8 00:31:02.303415 containerd[1500]: time="2025-11-08T00:31:02.303402122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:31:02.322541 containerd[1500]: time="2025-11-08T00:31:02.322512788Z" level=info msg="CreateContainer within sandbox \"7e90ac0153b51de54c6849492de08f3e433c22bd0aee9135732e01717479988a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:31:02.420881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955104938.mount: Deactivated successfully. 
Nov 8 00:31:02.429780 containerd[1500]: time="2025-11-08T00:31:02.429690128Z" level=info msg="CreateContainer within sandbox \"7e90ac0153b51de54c6849492de08f3e433c22bd0aee9135732e01717479988a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"96a0d527e1135c76c76070754d2299b5e0f6502c3191c61bfeafbf9147d5b2b3\"" Nov 8 00:31:02.443013 containerd[1500]: time="2025-11-08T00:31:02.441187729Z" level=info msg="StartContainer for \"96a0d527e1135c76c76070754d2299b5e0f6502c3191c61bfeafbf9147d5b2b3\"" Nov 8 00:31:02.516559 systemd[1]: Started cri-containerd-96a0d527e1135c76c76070754d2299b5e0f6502c3191c61bfeafbf9147d5b2b3.scope - libcontainer container 96a0d527e1135c76c76070754d2299b5e0f6502c3191c61bfeafbf9147d5b2b3. Nov 8 00:31:02.552573 containerd[1500]: time="2025-11-08T00:31:02.552544400Z" level=info msg="StartContainer for \"96a0d527e1135c76c76070754d2299b5e0f6502c3191c61bfeafbf9147d5b2b3\" returns successfully" Nov 8 00:31:02.646124 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:31:02.649211 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 8 00:31:02.886948 containerd[1500]: time="2025-11-08T00:31:02.886266639Z" level=info msg="StopPodSandbox for \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\"" Nov 8 00:31:03.015348 kubelet[2545]: I1108 00:31:02.982929 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fbzgm" podStartSLOduration=1.929556251 podStartE2EDuration="19.959891612s" podCreationTimestamp="2025-11-08 00:30:43 +0000 UTC" firstStartedPulling="2025-11-08 00:30:44.273751603 +0000 UTC m=+18.768253493" lastFinishedPulling="2025-11-08 00:31:02.304086966 +0000 UTC m=+36.798588854" observedRunningTime="2025-11-08 00:31:02.958108575 +0000 UTC m=+37.452610464" watchObservedRunningTime="2025-11-08 00:31:02.959891612 +0000 UTC m=+37.454393501" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.030 [INFO][3790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.032 [INFO][3790] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" iface="eth0" netns="/var/run/netns/cni-51096429-d6a0-97ca-2f3c-3b88d915fa63" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.033 [INFO][3790] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" iface="eth0" netns="/var/run/netns/cni-51096429-d6a0-97ca-2f3c-3b88d915fa63" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.035 [INFO][3790] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" iface="eth0" netns="/var/run/netns/cni-51096429-d6a0-97ca-2f3c-3b88d915fa63" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.035 [INFO][3790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.035 [INFO][3790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.214 [INFO][3818] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" HandleID="k8s-pod-network.a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.216 [INFO][3818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.217 [INFO][3818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.231 [WARNING][3818] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" HandleID="k8s-pod-network.a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.231 [INFO][3818] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" HandleID="k8s-pod-network.a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.233 [INFO][3818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:03.237727 containerd[1500]: 2025-11-08 00:31:03.235 [INFO][3790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:03.239420 containerd[1500]: time="2025-11-08T00:31:03.239108607Z" level=info msg="TearDown network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\" successfully" Nov 8 00:31:03.239420 containerd[1500]: time="2025-11-08T00:31:03.239145888Z" level=info msg="StopPodSandbox for \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\" returns successfully" Nov 8 00:31:03.241079 systemd[1]: run-netns-cni\x2d51096429\x2dd6a0\x2d97ca\x2d2f3c\x2d3b88d915fa63.mount: Deactivated successfully. 
Nov 8 00:31:03.339374 kubelet[2545]: I1108 00:31:03.338954 2545 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa74907f-f0d6-4fee-8b02-a0f5214b5103-whisker-backend-key-pair\") pod \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\" (UID: \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\") " Nov 8 00:31:03.339374 kubelet[2545]: I1108 00:31:03.339030 2545 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb8v7\" (UniqueName: \"kubernetes.io/projected/fa74907f-f0d6-4fee-8b02-a0f5214b5103-kube-api-access-rb8v7\") pod \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\" (UID: \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\") " Nov 8 00:31:03.376830 systemd[1]: var-lib-kubelet-pods-fa74907f\x2df0d6\x2d4fee\x2d8b02\x2da0f5214b5103-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drb8v7.mount: Deactivated successfully. Nov 8 00:31:03.384133 kubelet[2545]: I1108 00:31:03.378580 2545 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa74907f-f0d6-4fee-8b02-a0f5214b5103-kube-api-access-rb8v7" (OuterVolumeSpecName: "kube-api-access-rb8v7") pod "fa74907f-f0d6-4fee-8b02-a0f5214b5103" (UID: "fa74907f-f0d6-4fee-8b02-a0f5214b5103"). InnerVolumeSpecName "kube-api-access-rb8v7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:31:03.384133 kubelet[2545]: I1108 00:31:03.382458 2545 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa74907f-f0d6-4fee-8b02-a0f5214b5103-whisker-ca-bundle\") pod \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\" (UID: \"fa74907f-f0d6-4fee-8b02-a0f5214b5103\") " Nov 8 00:31:03.384133 kubelet[2545]: I1108 00:31:03.382682 2545 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rb8v7\" (UniqueName: \"kubernetes.io/projected/fa74907f-f0d6-4fee-8b02-a0f5214b5103-kube-api-access-rb8v7\") on node \"ci-4081-3-6-n-6ee8ddef06\" DevicePath \"\"" Nov 8 00:31:03.384133 kubelet[2545]: I1108 00:31:03.383212 2545 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa74907f-f0d6-4fee-8b02-a0f5214b5103-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fa74907f-f0d6-4fee-8b02-a0f5214b5103" (UID: "fa74907f-f0d6-4fee-8b02-a0f5214b5103"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:31:03.384133 kubelet[2545]: I1108 00:31:03.378443 2545 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa74907f-f0d6-4fee-8b02-a0f5214b5103-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fa74907f-f0d6-4fee-8b02-a0f5214b5103" (UID: "fa74907f-f0d6-4fee-8b02-a0f5214b5103"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:31:03.385914 systemd[1]: var-lib-kubelet-pods-fa74907f\x2df0d6\x2d4fee\x2d8b02\x2da0f5214b5103-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
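The mount-unit names in these systemd lines show systemd's path escaping: "/" becomes "-" and bytes outside [A-Za-z0-9_.] are written as \xNN, which is why the dashes inside the pod UID appear as \x2d and the "~" in kubernetes.io~projected as \x7e. A rough Go sketch of the rule, enough to reproduce the unit names in this log but not a complete systemd-escape implementation (it ignores, for example, the leading-dot and ":" special cases):

```go
// Sketch of systemd mount-unit name escaping as seen in the log above.
package main

import (
	"fmt"
	"strings"
)

func escapePath(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c == '_' || c == '.' ||
			(c >= '0' && c <= '9') ||
			(c >= 'a' && c <= 'z') ||
			(c >= 'A' && c <= 'Z'):
			b.WriteByte(c) // kept as-is
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // '-' -> \x2d, '~' -> \x7e, ...
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/fa74907f-f0d6-4fee-8b02-a0f5214b5103/volumes/kubernetes.io~projected/kube-api-access-rb8v7"
	// Matches the kube-api-access mount unit deactivated in the log above.
	fmt.Println(escapePath(p) + ".mount")
}
```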
Nov 8 00:31:03.483338 kubelet[2545]: I1108 00:31:03.483277 2545 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa74907f-f0d6-4fee-8b02-a0f5214b5103-whisker-ca-bundle\") on node \"ci-4081-3-6-n-6ee8ddef06\" DevicePath \"\"" Nov 8 00:31:03.483338 kubelet[2545]: I1108 00:31:03.483325 2545 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa74907f-f0d6-4fee-8b02-a0f5214b5103-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-6ee8ddef06\" DevicePath \"\"" Nov 8 00:31:03.676211 systemd[1]: Removed slice kubepods-besteffort-podfa74907f_f0d6_4fee_8b02_a0f5214b5103.slice - libcontainer container kubepods-besteffort-podfa74907f_f0d6_4fee_8b02_a0f5214b5103.slice. Nov 8 00:31:04.086977 systemd[1]: Created slice kubepods-besteffort-pod13800756_7bce_44ba_ac46_4639ec34a694.slice - libcontainer container kubepods-besteffort-pod13800756_7bce_44ba_ac46_4639ec34a694.slice. Nov 8 00:31:04.179721 systemd[1]: run-containerd-runc-k8s.io-96a0d527e1135c76c76070754d2299b5e0f6502c3191c61bfeafbf9147d5b2b3-runc.3HCSD4.mount: Deactivated successfully. Nov 8 00:31:04.191995 kubelet[2545]: I1108 00:31:04.191926 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28vnd\" (UniqueName: \"kubernetes.io/projected/13800756-7bce-44ba-ac46-4639ec34a694-kube-api-access-28vnd\") pod \"whisker-6fcd4bdfbb-9mv84\" (UID: \"13800756-7bce-44ba-ac46-4639ec34a694\") " pod="calico-system/whisker-6fcd4bdfbb-9mv84" Nov 8 00:31:04.191995 kubelet[2545]: I1108 00:31:04.191991 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13800756-7bce-44ba-ac46-4639ec34a694-whisker-backend-key-pair\") pod \"whisker-6fcd4bdfbb-9mv84\" (UID: \"13800756-7bce-44ba-ac46-4639ec34a694\") " pod="calico-system/whisker-6fcd4bdfbb-9mv84" Nov 8 00:31:04.193157 kubelet[2545]: I1108 00:31:04.192021 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13800756-7bce-44ba-ac46-4639ec34a694-whisker-ca-bundle\") pod \"whisker-6fcd4bdfbb-9mv84\" (UID: \"13800756-7bce-44ba-ac46-4639ec34a694\") " pod="calico-system/whisker-6fcd4bdfbb-9mv84" Nov 8 00:31:04.399773 containerd[1500]: time="2025-11-08T00:31:04.399141315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fcd4bdfbb-9mv84,Uid:13800756-7bce-44ba-ac46-4639ec34a694,Namespace:calico-system,Attempt:0,}" Nov 8 00:31:04.569786 systemd-networkd[1398]: cali7e538d28546: Link UP Nov 8 00:31:04.571926 systemd-networkd[1398]: cali7e538d28546: Gained carrier Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.446 [INFO][3956] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.459 [INFO][3956] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0 whisker-6fcd4bdfbb- calico-system 13800756-7bce-44ba-ac46-4639ec34a694 889 0 2025-11-08 00:31:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6fcd4bdfbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-6ee8ddef06 
whisker-6fcd4bdfbb-9mv84 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7e538d28546 [] [] }} ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Namespace="calico-system" Pod="whisker-6fcd4bdfbb-9mv84" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.459 [INFO][3956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Namespace="calico-system" Pod="whisker-6fcd4bdfbb-9mv84" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.498 [INFO][3967] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" HandleID="k8s-pod-network.cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.498 [INFO][3967] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" HandleID="k8s-pod-network.cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6ee8ddef06", "pod":"whisker-6fcd4bdfbb-9mv84", "timestamp":"2025-11-08 00:31:04.498691494 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6ee8ddef06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.498 [INFO][3967] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.499 [INFO][3967] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.499 [INFO][3967] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6ee8ddef06' Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.509 [INFO][3967] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.522 [INFO][3967] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.531 [INFO][3967] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.537 [INFO][3967] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.539 [INFO][3967] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.539 [INFO][3967] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.542 [INFO][3967] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2 Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.548 [INFO][3967] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.555 [INFO][3967] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.129/26] block=192.168.12.128/26 handle="k8s-pod-network.cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.556 [INFO][3967] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.129/26] handle="k8s-pod-network.cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.556 [INFO][3967] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:04.597314 containerd[1500]: 2025-11-08 00:31:04.556 [INFO][3967] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.129/26] IPv6=[] ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" HandleID="k8s-pod-network.cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" Nov 8 00:31:04.600015 containerd[1500]: 2025-11-08 00:31:04.559 [INFO][3956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Namespace="calico-system" Pod="whisker-6fcd4bdfbb-9mv84" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0", GenerateName:"whisker-6fcd4bdfbb-", Namespace:"calico-system", SelfLink:"", UID:"13800756-7bce-44ba-ac46-4639ec34a694", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fcd4bdfbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"", Pod:"whisker-6fcd4bdfbb-9mv84", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7e538d28546", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:04.600015 containerd[1500]: 2025-11-08 00:31:04.559 [INFO][3956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.129/32] ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Namespace="calico-system" Pod="whisker-6fcd4bdfbb-9mv84" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" Nov 8 00:31:04.600015 containerd[1500]: 2025-11-08 00:31:04.559 [INFO][3956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e538d28546 ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Namespace="calico-system" Pod="whisker-6fcd4bdfbb-9mv84" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" Nov 8 00:31:04.600015 containerd[1500]: 2025-11-08 00:31:04.573 [INFO][3956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Namespace="calico-system" Pod="whisker-6fcd4bdfbb-9mv84" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" Nov 8 00:31:04.600015 containerd[1500]: 2025-11-08 00:31:04.573 [INFO][3956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Namespace="calico-system" 
Pod="whisker-6fcd4bdfbb-9mv84" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0", GenerateName:"whisker-6fcd4bdfbb-", Namespace:"calico-system", SelfLink:"", UID:"13800756-7bce-44ba-ac46-4639ec34a694", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 31, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6fcd4bdfbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2", Pod:"whisker-6fcd4bdfbb-9mv84", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7e538d28546", MAC:"fa:66:44:ca:28:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:04.600015 containerd[1500]: 2025-11-08 00:31:04.588 [INFO][3956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2" Namespace="calico-system" Pod="whisker-6fcd4bdfbb-9mv84" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--6fcd4bdfbb--9mv84-eth0" Nov 8 00:31:04.615201 containerd[1500]: time="2025-11-08T00:31:04.614882890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:04.615201 containerd[1500]: time="2025-11-08T00:31:04.614918446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:04.615201 containerd[1500]: time="2025-11-08T00:31:04.614927665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:04.615201 containerd[1500]: time="2025-11-08T00:31:04.614977368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:04.631353 systemd[1]: Started cri-containerd-cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2.scope - libcontainer container cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2. 
Nov 8 00:31:04.689792 containerd[1500]: time="2025-11-08T00:31:04.689636119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fcd4bdfbb-9mv84,Uid:13800756-7bce-44ba-ac46-4639ec34a694,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf362a20b64507ed90a68d4298a0302ba471d40bc12d177f35e42b1c20be56b2\"" Nov 8 00:31:04.692275 containerd[1500]: time="2025-11-08T00:31:04.691989351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:31:05.123461 containerd[1500]: time="2025-11-08T00:31:05.123386390Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:05.131776 containerd[1500]: time="2025-11-08T00:31:05.124574853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:31:05.131885 containerd[1500]: time="2025-11-08T00:31:05.124671736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:31:05.132062 kubelet[2545]: E1108 00:31:05.132001 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:05.132597 kubelet[2545]: E1108 00:31:05.132074 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:05.140095 kubelet[2545]: E1108 00:31:05.133740 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6fcd4bdfbb-9mv84_calico-system(13800756-7bce-44ba-ac46-4639ec34a694): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:05.141043 containerd[1500]: time="2025-11-08T00:31:05.141002799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:31:05.599731 containerd[1500]: time="2025-11-08T00:31:05.599448095Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:05.601281 containerd[1500]: time="2025-11-08T00:31:05.601035220Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:31:05.601281 containerd[1500]: time="2025-11-08T00:31:05.601147982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:05.603178 kubelet[2545]: E1108 00:31:05.603116 2545 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:05.603553 kubelet[2545]: E1108 00:31:05.603187 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:05.603553 kubelet[2545]: E1108 00:31:05.603305 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6fcd4bdfbb-9mv84_calico-system(13800756-7bce-44ba-ac46-4639ec34a694): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:05.603553 kubelet[2545]: E1108 00:31:05.603366 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:31:05.649349 kubelet[2545]: I1108 00:31:05.649301 2545 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa74907f-f0d6-4fee-8b02-a0f5214b5103" path="/var/lib/kubelet/pods/fa74907f-f0d6-4fee-8b02-a0f5214b5103/volumes" Nov 8 00:31:05.872530 systemd-networkd[1398]: cali7e538d28546: Gained IPv6LL Nov 8 00:31:05.929939 kubelet[2545]: E1108 00:31:05.929834 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:31:06.641592 
containerd[1500]: time="2025-11-08T00:31:06.641184066Z" level=info msg="StopPodSandbox for \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\"" Nov 8 00:31:06.643961 containerd[1500]: time="2025-11-08T00:31:06.643438869Z" level=info msg="StopPodSandbox for \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\"" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.759 [INFO][4068] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.760 [INFO][4068] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" iface="eth0" netns="/var/run/netns/cni-37ab8b15-9d91-5f8f-4ba6-8cf669eede94" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.760 [INFO][4068] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" iface="eth0" netns="/var/run/netns/cni-37ab8b15-9d91-5f8f-4ba6-8cf669eede94" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.760 [INFO][4068] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" iface="eth0" netns="/var/run/netns/cni-37ab8b15-9d91-5f8f-4ba6-8cf669eede94" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.761 [INFO][4068] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.761 [INFO][4068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.802 [INFO][4091] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" HandleID="k8s-pod-network.1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.802 [INFO][4091] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.805 [INFO][4091] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.816 [WARNING][4091] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" HandleID="k8s-pod-network.1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.816 [INFO][4091] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" HandleID="k8s-pod-network.1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.818 [INFO][4091] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
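The failed pulls above follow a fixed shape: containerd reports "trying next host - response was http.StatusNotFound", the pull fails with a gRPC NotFound, and the kubelet escalates that to ErrImagePull and then ImagePullBackOff. If the tag really is absent from the registry, the same error can be reproduced directly against containerd. A sketch using the standard containerd Go client, assuming access to the node's socket; `crictl pull <ref>` from a shell on the node is the quicker equivalent:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd the kubelet uses (containerd[1500] above).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The tag the kubelet could not pull in the log above.
	ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		// A missing tag surfaces as the same "not found" resolution error
		// the kubelet wrapped into ErrImagePull.
		fmt.Printf("pull failed: %v\n", err)
	}
}
```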
Nov 8 00:31:06.824279 containerd[1500]: 2025-11-08 00:31:06.820 [INFO][4068] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:06.825150 systemd[1]: run-netns-cni\x2d37ab8b15\x2d9d91\x2d5f8f\x2d4ba6\x2d8cf669eede94.mount: Deactivated successfully. Nov 8 00:31:06.828083 containerd[1500]: time="2025-11-08T00:31:06.827957956Z" level=info msg="TearDown network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\" successfully" Nov 8 00:31:06.828083 containerd[1500]: time="2025-11-08T00:31:06.827987060Z" level=info msg="StopPodSandbox for \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\" returns successfully" Nov 8 00:31:06.831740 containerd[1500]: time="2025-11-08T00:31:06.831453519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7584974-6v6qw,Uid:23fa7156-ab47-44e8-be85-07831bed27aa,Namespace:calico-system,Attempt:1,}" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.770 [INFO][4067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.771 [INFO][4067] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" iface="eth0" netns="/var/run/netns/cni-b5629a72-e6eb-13f8-de9f-de99ff1e61f1" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.771 [INFO][4067] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" iface="eth0" netns="/var/run/netns/cni-b5629a72-e6eb-13f8-de9f-de99ff1e61f1" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.772 [INFO][4067] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" iface="eth0" netns="/var/run/netns/cni-b5629a72-e6eb-13f8-de9f-de99ff1e61f1" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.772 [INFO][4067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.772 [INFO][4067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.811 [INFO][4101] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" HandleID="k8s-pod-network.03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.811 [INFO][4101] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.817 [INFO][4101] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.827 [WARNING][4101] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" HandleID="k8s-pod-network.03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.827 [INFO][4101] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" HandleID="k8s-pod-network.03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.828 [INFO][4101] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:06.837285 containerd[1500]: 2025-11-08 00:31:06.833 [INFO][4067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:06.838679 containerd[1500]: time="2025-11-08T00:31:06.838141345Z" level=info msg="TearDown network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\" successfully" Nov 8 00:31:06.838679 containerd[1500]: time="2025-11-08T00:31:06.838162105Z" level=info msg="StopPodSandbox for \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\" returns successfully" Nov 8 00:31:06.841218 containerd[1500]: time="2025-11-08T00:31:06.841187422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69548547f7-zwltf,Uid:e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:31:06.845811 systemd[1]: run-netns-cni\x2db5629a72\x2de6eb\x2d13f8\x2dde9f\x2dde99ff1e61f1.mount: Deactivated successfully. 
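The run-netns-cni\x2d...mount units that systemd reports deactivating are the netns bind mounts under /var/run/netns: in unit names systemd turns "/" into "-" and writes a literal "-" as \x2d. A small sketch reversing that escaping (the helper name is ours; `systemd-escape -u` does the same from a shell):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's unit-name escaping: "-" encodes "/",
// and "\x2d" encodes a literal "-". Helper name is ours.
func unescapeUnit(name string) string {
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		if strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name) {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3 // skip the rest of the \xNN sequence
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/')
		} else {
			b.WriteByte(name[i])
		}
	}
	return b.String()
}

func main() {
	// The first netns mount unit from the log, minus its ".mount" suffix.
	unit := `run-netns-cni\x2d37ab8b15\x2d9d91\x2d5f8f\x2d4ba6\x2d8cf669eede94`
	fmt.Println("/" + unescapeUnit(unit))
	// Output: /run/netns/cni-37ab8b15-9d91-5f8f-4ba6-8cf669eede94
}
```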
Nov 8 00:31:07.030118 systemd-networkd[1398]: caliaa38c7c6f9a: Link UP Nov 8 00:31:07.030300 systemd-networkd[1398]: caliaa38c7c6f9a: Gained carrier Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:06.909 [INFO][4112] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:06.930 [INFO][4112] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0 calico-kube-controllers-b7584974- calico-system 23fa7156-ab47-44e8-be85-07831bed27aa 916 0 2025-11-08 00:30:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b7584974 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-6ee8ddef06 calico-kube-controllers-b7584974-6v6qw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaa38c7c6f9a [] [] }} ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Namespace="calico-system" Pod="calico-kube-controllers-b7584974-6v6qw" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:06.932 [INFO][4112] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Namespace="calico-system" Pod="calico-kube-controllers-b7584974-6v6qw" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:06.976 [INFO][4137] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" HandleID="k8s-pod-network.ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:06.976 [INFO][4137] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" HandleID="k8s-pod-network.ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6ee8ddef06", "pod":"calico-kube-controllers-b7584974-6v6qw", "timestamp":"2025-11-08 00:31:06.976472564 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6ee8ddef06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:06.976 [INFO][4137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:06.976 [INFO][4137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:06.976 [INFO][4137] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6ee8ddef06' Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:06.987 [INFO][4137] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.002 [INFO][4137] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.007 [INFO][4137] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.009 [INFO][4137] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.011 [INFO][4137] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.011 [INFO][4137] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.013 [INFO][4137] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949 Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.017 [INFO][4137] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.023 [INFO][4137] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.130/26] block=192.168.12.128/26 handle="k8s-pod-network.ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.023 [INFO][4137] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.130/26] handle="k8s-pod-network.ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.023 [INFO][4137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
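The IPAM lines above show this node confirming its affinity for the block 192.168.12.128/26 and claiming 192.168.12.130 from it, the next address after the .129 handed to the whisker pod earlier. A /26 block spans 64 addresses, .128 through .191; a quick enumeration with the Go standard library (this says nothing about Calico's internal allocation order, only what the block contains):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The affinity block this node claimed, per the ipam lines above.
	block := netip.MustParsePrefix("192.168.12.128/26")

	// Walk every address the block contains: .128 up to .191, 64 in total.
	// The pods in this log drew .129, .130, .131, .132 from this range.
	n := 0
	for a := block.Masked().Addr(); block.Contains(a); a = a.Next() {
		fmt.Println(a)
		n++
	}
	fmt.Println(n, "addresses in the /26")
}
```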
Nov 8 00:31:07.047611 containerd[1500]: 2025-11-08 00:31:07.023 [INFO][4137] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.130/26] IPv6=[] ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" HandleID="k8s-pod-network.ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:07.049554 containerd[1500]: 2025-11-08 00:31:07.026 [INFO][4112] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Namespace="calico-system" Pod="calico-kube-controllers-b7584974-6v6qw" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0", GenerateName:"calico-kube-controllers-b7584974-", Namespace:"calico-system", SelfLink:"", UID:"23fa7156-ab47-44e8-be85-07831bed27aa", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7584974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"", Pod:"calico-kube-controllers-b7584974-6v6qw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa38c7c6f9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:07.049554 containerd[1500]: 2025-11-08 00:31:07.026 [INFO][4112] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.130/32] ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Namespace="calico-system" Pod="calico-kube-controllers-b7584974-6v6qw" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:07.049554 containerd[1500]: 2025-11-08 00:31:07.026 [INFO][4112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa38c7c6f9a ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Namespace="calico-system" Pod="calico-kube-controllers-b7584974-6v6qw" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:07.049554 containerd[1500]: 2025-11-08 00:31:07.031 [INFO][4112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Namespace="calico-system" Pod="calico-kube-controllers-b7584974-6v6qw" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 
00:31:07.049554 containerd[1500]: 2025-11-08 00:31:07.032 [INFO][4112] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Namespace="calico-system" Pod="calico-kube-controllers-b7584974-6v6qw" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0", GenerateName:"calico-kube-controllers-b7584974-", Namespace:"calico-system", SelfLink:"", UID:"23fa7156-ab47-44e8-be85-07831bed27aa", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7584974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949", Pod:"calico-kube-controllers-b7584974-6v6qw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa38c7c6f9a", MAC:"8a:12:7d:04:a8:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:07.049554 containerd[1500]: 2025-11-08 00:31:07.045 [INFO][4112] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949" Namespace="calico-system" Pod="calico-kube-controllers-b7584974-6v6qw" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:07.063136 containerd[1500]: time="2025-11-08T00:31:07.062995726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:07.063338 containerd[1500]: time="2025-11-08T00:31:07.063127324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:07.063338 containerd[1500]: time="2025-11-08T00:31:07.063157051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:07.063468 containerd[1500]: time="2025-11-08T00:31:07.063335628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:07.080373 systemd[1]: Started cri-containerd-ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949.scope - libcontainer container ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949. 
Nov 8 00:31:07.123446 containerd[1500]: time="2025-11-08T00:31:07.123397484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b7584974-6v6qw,Uid:23fa7156-ab47-44e8-be85-07831bed27aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949\"" Nov 8 00:31:07.130712 containerd[1500]: time="2025-11-08T00:31:07.130613534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:31:07.132215 systemd-networkd[1398]: calic36f7f0af98: Link UP Nov 8 00:31:07.133353 systemd-networkd[1398]: calic36f7f0af98: Gained carrier Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:06.933 [INFO][4121] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:06.966 [INFO][4121] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0 calico-apiserver-69548547f7- calico-apiserver e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2 917 0 2025-11-08 00:30:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69548547f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-6ee8ddef06 calico-apiserver-69548547f7-zwltf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic36f7f0af98 [] [] }} ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-zwltf" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:06.966 [INFO][4121] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-zwltf" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.001 [INFO][4145] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" HandleID="k8s-pod-network.b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.001 [INFO][4145] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" HandleID="k8s-pod-network.b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-6ee8ddef06", "pod":"calico-apiserver-69548547f7-zwltf", "timestamp":"2025-11-08 00:31:07.001388044 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6ee8ddef06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:07.146599 containerd[1500]: 
2025-11-08 00:31:07.001 [INFO][4145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.023 [INFO][4145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.024 [INFO][4145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6ee8ddef06' Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.087 [INFO][4145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.103 [INFO][4145] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.108 [INFO][4145] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.110 [INFO][4145] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.112 [INFO][4145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.112 [INFO][4145] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.116 [INFO][4145] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873 Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.121 [INFO][4145] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.125 [INFO][4145] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.131/26] block=192.168.12.128/26 handle="k8s-pod-network.b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.126 [INFO][4145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.131/26] handle="k8s-pod-network.b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.126 [INFO][4145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
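Note how the two concurrent CNI ADDs serialize on the "host-wide IPAM lock": the second request announced it at 00:31:07.001 but only acquired the lock at 00:31:07.023, immediately after the first request released it. A generic sketch of that pattern using an exclusive file lock; the path is hypothetical, and this is an illustration of the pattern rather than a claim about how Calico actually implements the lock:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Hypothetical lock path, for illustration only.
	f, err := os.OpenFile("/tmp/host-ipam.lock", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Blocks until any other holder on this host releases the lock,
	// mirroring the "About to acquire" / "Acquired" / "Released" lines.
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		log.Fatal(err)
	}
	defer unix.Flock(int(f.Fd()), unix.LOCK_UN)

	// ... critical section: read the block, claim an IP, write it back ...
}
```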
Nov 8 00:31:07.146599 containerd[1500]: 2025-11-08 00:31:07.126 [INFO][4145] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.131/26] IPv6=[] ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" HandleID="k8s-pod-network.b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:07.148353 containerd[1500]: 2025-11-08 00:31:07.128 [INFO][4121] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-zwltf" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0", GenerateName:"calico-apiserver-69548547f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69548547f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"", Pod:"calico-apiserver-69548547f7-zwltf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic36f7f0af98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:07.148353 containerd[1500]: 2025-11-08 00:31:07.128 [INFO][4121] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.131/32] ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-zwltf" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:07.148353 containerd[1500]: 2025-11-08 00:31:07.128 [INFO][4121] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic36f7f0af98 ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-zwltf" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:07.148353 containerd[1500]: 2025-11-08 00:31:07.134 [INFO][4121] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-zwltf" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:07.148353 containerd[1500]: 2025-11-08 00:31:07.134 
[INFO][4121] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-zwltf" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0", GenerateName:"calico-apiserver-69548547f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69548547f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873", Pod:"calico-apiserver-69548547f7-zwltf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic36f7f0af98", MAC:"22:c5:ad:ac:b1:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:07.148353 containerd[1500]: 2025-11-08 00:31:07.143 [INFO][4121] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-zwltf" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:07.161180 containerd[1500]: time="2025-11-08T00:31:07.160968624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:07.161180 containerd[1500]: time="2025-11-08T00:31:07.161010252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:07.161180 containerd[1500]: time="2025-11-08T00:31:07.161032535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:07.161180 containerd[1500]: time="2025-11-08T00:31:07.161110060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:07.176389 systemd[1]: Started cri-containerd-b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873.scope - libcontainer container b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873. 
Nov 8 00:31:07.214480 containerd[1500]: time="2025-11-08T00:31:07.214422317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69548547f7-zwltf,Uid:e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873\"" Nov 8 00:31:07.562031 containerd[1500]: time="2025-11-08T00:31:07.561965034Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:07.564138 containerd[1500]: time="2025-11-08T00:31:07.563606320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:31:07.564138 containerd[1500]: time="2025-11-08T00:31:07.563687373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:07.564377 kubelet[2545]: E1108 00:31:07.563888 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:07.564377 kubelet[2545]: E1108 00:31:07.563944 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:07.564377 kubelet[2545]: E1108 00:31:07.564122 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-b7584974-6v6qw_calico-system(23fa7156-ab47-44e8-be85-07831bed27aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:07.564377 kubelet[2545]: E1108 00:31:07.564183 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:31:07.565061 containerd[1500]: time="2025-11-08T00:31:07.564881294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:07.939332 kubelet[2545]: E1108 00:31:07.939242 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:31:08.082131 containerd[1500]: time="2025-11-08T00:31:08.081817417Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:08.084137 containerd[1500]: time="2025-11-08T00:31:08.083947864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:08.084137 containerd[1500]: time="2025-11-08T00:31:08.084067781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:08.085416 kubelet[2545]: E1108 00:31:08.084839 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:08.085416 kubelet[2545]: E1108 00:31:08.084906 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:08.085416 kubelet[2545]: E1108 00:31:08.085018 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69548547f7-zwltf_calico-apiserver(e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:08.085416 kubelet[2545]: E1108 00:31:08.085076 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:31:08.641668 containerd[1500]: time="2025-11-08T00:31:08.641344208Z" level=info msg="StopPodSandbox for \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\"" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.707 [INFO][4274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.707 [INFO][4274] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" iface="eth0" netns="/var/run/netns/cni-b9ff04ad-c7ec-61e7-0a1c-20fa5b5ad962" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.708 [INFO][4274] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" iface="eth0" netns="/var/run/netns/cni-b9ff04ad-c7ec-61e7-0a1c-20fa5b5ad962" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.708 [INFO][4274] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" iface="eth0" netns="/var/run/netns/cni-b9ff04ad-c7ec-61e7-0a1c-20fa5b5ad962" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.708 [INFO][4274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.709 [INFO][4274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.735 [INFO][4281] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" HandleID="k8s-pod-network.f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.735 [INFO][4281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.736 [INFO][4281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.744 [WARNING][4281] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" HandleID="k8s-pod-network.f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.744 [INFO][4281] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" HandleID="k8s-pod-network.f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.746 [INFO][4281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:08.752998 containerd[1500]: 2025-11-08 00:31:08.749 [INFO][4274] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:08.758276 containerd[1500]: time="2025-11-08T00:31:08.754703071Z" level=info msg="TearDown network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\" successfully" Nov 8 00:31:08.758276 containerd[1500]: time="2025-11-08T00:31:08.755748012Z" level=info msg="StopPodSandbox for \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\" returns successfully" Nov 8 00:31:08.754867 systemd-networkd[1398]: caliaa38c7c6f9a: Gained IPv6LL Nov 8 00:31:08.763400 systemd[1]: run-netns-cni\x2db9ff04ad\x2dc7ec\x2d61e7\x2d0a1c\x2d20fa5b5ad962.mount: Deactivated successfully. Nov 8 00:31:08.768208 containerd[1500]: time="2025-11-08T00:31:08.768077701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8bzvr,Uid:69a2d468-d7b2-4842-a119-55b88cf0a542,Namespace:kube-system,Attempt:1,}" Nov 8 00:31:08.817487 systemd-networkd[1398]: calic36f7f0af98: Gained IPv6LL Nov 8 00:31:08.947913 systemd-networkd[1398]: cali2d331f44d48: Link UP Nov 8 00:31:08.952477 kubelet[2545]: E1108 00:31:08.949629 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:31:08.951509 systemd-networkd[1398]: cali2d331f44d48: Gained carrier Nov 8 00:31:08.954546 kubelet[2545]: E1108 00:31:08.954358 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.820 [INFO][4287] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.835 [INFO][4287] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0 coredns-66bc5c9577- kube-system 69a2d468-d7b2-4842-a119-55b88cf0a542 939 0 2025-11-08 00:30:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-6ee8ddef06 coredns-66bc5c9577-8bzvr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2d331f44d48 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-8bzvr" 
WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.835 [INFO][4287] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-8bzvr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.879 [INFO][4300] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" HandleID="k8s-pod-network.e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.880 [INFO][4300] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" HandleID="k8s-pod-network.e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-6ee8ddef06", "pod":"coredns-66bc5c9577-8bzvr", "timestamp":"2025-11-08 00:31:08.879916828 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6ee8ddef06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.880 [INFO][4300] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.880 [INFO][4300] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.880 [INFO][4300] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6ee8ddef06' Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.892 [INFO][4300] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.900 [INFO][4300] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.907 [INFO][4300] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.911 [INFO][4300] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.915 [INFO][4300] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.915 [INFO][4300] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.918 [INFO][4300] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.925 [INFO][4300] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.936 [INFO][4300] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.132/26] block=192.168.12.128/26 handle="k8s-pod-network.e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.937 [INFO][4300] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.132/26] handle="k8s-pod-network.e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.937 [INFO][4300] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:08.982329 containerd[1500]: 2025-11-08 00:31:08.937 [INFO][4300] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.132/26] IPv6=[] ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" HandleID="k8s-pod-network.e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:08.986461 containerd[1500]: 2025-11-08 00:31:08.941 [INFO][4287] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-8bzvr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"69a2d468-d7b2-4842-a119-55b88cf0a542", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"", Pod:"coredns-66bc5c9577-8bzvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d331f44d48", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:08.986461 containerd[1500]: 2025-11-08 00:31:08.941 [INFO][4287] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.132/32] ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-8bzvr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:08.986461 containerd[1500]: 2025-11-08 00:31:08.941 [INFO][4287] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d331f44d48 ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-8bzvr" 
WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:08.986461 containerd[1500]: 2025-11-08 00:31:08.951 [INFO][4287] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-8bzvr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:08.986461 containerd[1500]: 2025-11-08 00:31:08.953 [INFO][4287] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-8bzvr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"69a2d468-d7b2-4842-a119-55b88cf0a542", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc", Pod:"coredns-66bc5c9577-8bzvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d331f44d48", MAC:"86:fb:4d:95:31:94", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:08.986753 containerd[1500]: 2025-11-08 00:31:08.976 [INFO][4287] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc" Namespace="kube-system" Pod="coredns-66bc5c9577-8bzvr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:09.018839 containerd[1500]: time="2025-11-08T00:31:09.017935514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:09.018839 containerd[1500]: time="2025-11-08T00:31:09.018006809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:09.018839 containerd[1500]: time="2025-11-08T00:31:09.018022719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:09.018839 containerd[1500]: time="2025-11-08T00:31:09.018111135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:09.064749 systemd[1]: Started cri-containerd-e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc.scope - libcontainer container e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc. Nov 8 00:31:09.100043 containerd[1500]: time="2025-11-08T00:31:09.099943002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8bzvr,Uid:69a2d468-d7b2-4842-a119-55b88cf0a542,Namespace:kube-system,Attempt:1,} returns sandbox id \"e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc\"" Nov 8 00:31:09.107544 containerd[1500]: time="2025-11-08T00:31:09.107505329Z" level=info msg="CreateContainer within sandbox \"e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:31:09.133030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount885400607.mount: Deactivated successfully. Nov 8 00:31:09.135159 containerd[1500]: time="2025-11-08T00:31:09.135125511Z" level=info msg="CreateContainer within sandbox \"e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17193972d9a3f27e6afe52712361e67c0921f9ad6bd9326f8afb861ccbde7f4a\"" Nov 8 00:31:09.135795 containerd[1500]: time="2025-11-08T00:31:09.135773022Z" level=info msg="StartContainer for \"17193972d9a3f27e6afe52712361e67c0921f9ad6bd9326f8afb861ccbde7f4a\"" Nov 8 00:31:09.182823 systemd[1]: Started cri-containerd-17193972d9a3f27e6afe52712361e67c0921f9ad6bd9326f8afb861ccbde7f4a.scope - libcontainer container 17193972d9a3f27e6afe52712361e67c0921f9ad6bd9326f8afb861ccbde7f4a. Nov 8 00:31:09.219239 containerd[1500]: time="2025-11-08T00:31:09.218420205Z" level=info msg="StartContainer for \"17193972d9a3f27e6afe52712361e67c0921f9ad6bd9326f8afb861ccbde7f4a\" returns successfully" Nov 8 00:31:09.640942 containerd[1500]: time="2025-11-08T00:31:09.640908394Z" level=info msg="StopPodSandbox for \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\"" Nov 8 00:31:09.641715 containerd[1500]: time="2025-11-08T00:31:09.641115724Z" level=info msg="StopPodSandbox for \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\"" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.717 [INFO][4432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.717 [INFO][4432] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" iface="eth0" netns="/var/run/netns/cni-eb85a22d-5e92-5858-c44e-2afe7d7b9a0b" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.717 [INFO][4432] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" iface="eth0" netns="/var/run/netns/cni-eb85a22d-5e92-5858-c44e-2afe7d7b9a0b" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.717 [INFO][4432] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" iface="eth0" netns="/var/run/netns/cni-eb85a22d-5e92-5858-c44e-2afe7d7b9a0b" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.717 [INFO][4432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.717 [INFO][4432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.748 [INFO][4442] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" HandleID="k8s-pod-network.ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.748 [INFO][4442] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.748 [INFO][4442] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.766 [WARNING][4442] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" HandleID="k8s-pod-network.ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.766 [INFO][4442] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" HandleID="k8s-pod-network.ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.768 [INFO][4442] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:09.772368 containerd[1500]: 2025-11-08 00:31:09.770 [INFO][4432] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:09.774359 containerd[1500]: time="2025-11-08T00:31:09.772929982Z" level=info msg="TearDown network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\" successfully" Nov 8 00:31:09.774359 containerd[1500]: time="2025-11-08T00:31:09.772963465Z" level=info msg="StopPodSandbox for \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\" returns successfully" Nov 8 00:31:09.777005 containerd[1500]: time="2025-11-08T00:31:09.776803224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mxdr,Uid:f0c2bf49-2c83-4e41-9990-a77826efb954,Namespace:calico-system,Attempt:1,}" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.739 [INFO][4424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.739 [INFO][4424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" iface="eth0" netns="/var/run/netns/cni-3a75d9e2-35e7-e23c-8f51-c1cf9828d6af" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.739 [INFO][4424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" iface="eth0" netns="/var/run/netns/cni-3a75d9e2-35e7-e23c-8f51-c1cf9828d6af" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.739 [INFO][4424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" iface="eth0" netns="/var/run/netns/cni-3a75d9e2-35e7-e23c-8f51-c1cf9828d6af" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.740 [INFO][4424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.740 [INFO][4424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.760 [INFO][4448] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" HandleID="k8s-pod-network.45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.761 [INFO][4448] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.768 [INFO][4448] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.779 [WARNING][4448] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" HandleID="k8s-pod-network.45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.779 [INFO][4448] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" HandleID="k8s-pod-network.45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.782 [INFO][4448] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:09.787354 containerd[1500]: 2025-11-08 00:31:09.784 [INFO][4424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:09.787354 containerd[1500]: time="2025-11-08T00:31:09.787376608Z" level=info msg="TearDown network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\" successfully" Nov 8 00:31:09.790688 containerd[1500]: time="2025-11-08T00:31:09.787393218Z" level=info msg="StopPodSandbox for \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\" returns successfully" Nov 8 00:31:09.790688 containerd[1500]: time="2025-11-08T00:31:09.789453273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xpcnb,Uid:fa6c771d-e186-4cd9-a6e0-552ae2873655,Namespace:calico-system,Attempt:1,}" Nov 8 00:31:09.905688 systemd-networkd[1398]: cali724d28be9b9: Link UP Nov 8 00:31:09.909316 systemd-networkd[1398]: cali724d28be9b9: Gained carrier Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.826 [INFO][4456] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.839 [INFO][4456] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0 csi-node-driver- calico-system f0c2bf49-2c83-4e41-9990-a77826efb954 958 0 2025-11-08 00:30:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-6ee8ddef06 csi-node-driver-4mxdr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali724d28be9b9 [] [] }} ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Namespace="calico-system" Pod="csi-node-driver-4mxdr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.839 [INFO][4456] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Namespace="calico-system" Pod="csi-node-driver-4mxdr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.861 [INFO][4480] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" 
HandleID="k8s-pod-network.89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.862 [INFO][4480] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" HandleID="k8s-pod-network.89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6ee8ddef06", "pod":"csi-node-driver-4mxdr", "timestamp":"2025-11-08 00:31:09.861615218 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6ee8ddef06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.862 [INFO][4480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.862 [INFO][4480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.862 [INFO][4480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6ee8ddef06' Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.868 [INFO][4480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.874 [INFO][4480] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.878 [INFO][4480] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.880 [INFO][4480] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.883 [INFO][4480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.883 [INFO][4480] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.884 [INFO][4480] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7 Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.887 [INFO][4480] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.894 [INFO][4480] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.133/26] block=192.168.12.128/26 handle="k8s-pod-network.89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.894 [INFO][4480] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.133/26] handle="k8s-pod-network.89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.894 [INFO][4480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:09.923796 containerd[1500]: 2025-11-08 00:31:09.894 [INFO][4480] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.133/26] IPv6=[] ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" HandleID="k8s-pod-network.89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.924347 containerd[1500]: 2025-11-08 00:31:09.896 [INFO][4456] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Namespace="calico-system" Pod="csi-node-driver-4mxdr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0c2bf49-2c83-4e41-9990-a77826efb954", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"", Pod:"csi-node-driver-4mxdr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724d28be9b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:09.924347 containerd[1500]: 2025-11-08 00:31:09.896 [INFO][4456] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.133/32] ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Namespace="calico-system" Pod="csi-node-driver-4mxdr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.924347 containerd[1500]: 2025-11-08 00:31:09.896 [INFO][4456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali724d28be9b9 ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Namespace="calico-system" Pod="csi-node-driver-4mxdr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.924347 containerd[1500]: 2025-11-08 00:31:09.911 [INFO][4456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" 
Namespace="calico-system" Pod="csi-node-driver-4mxdr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.924347 containerd[1500]: 2025-11-08 00:31:09.911 [INFO][4456] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Namespace="calico-system" Pod="csi-node-driver-4mxdr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0c2bf49-2c83-4e41-9990-a77826efb954", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7", Pod:"csi-node-driver-4mxdr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724d28be9b9", MAC:"fe:60:4d:c3:5a:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:09.924347 containerd[1500]: 2025-11-08 00:31:09.921 [INFO][4456] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7" Namespace="calico-system" Pod="csi-node-driver-4mxdr" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:09.938482 containerd[1500]: time="2025-11-08T00:31:09.938057173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:09.938482 containerd[1500]: time="2025-11-08T00:31:09.938126534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:09.938482 containerd[1500]: time="2025-11-08T00:31:09.938140770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:09.938868 containerd[1500]: time="2025-11-08T00:31:09.938398686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:09.963395 systemd[1]: Started cri-containerd-89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7.scope - libcontainer container 89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7. 
Nov 8 00:31:09.985988 kubelet[2545]: I1108 00:31:09.984476 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8bzvr" podStartSLOduration=37.984460195 podStartE2EDuration="37.984460195s" podCreationTimestamp="2025-11-08 00:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:09.963524265 +0000 UTC m=+44.458026154" watchObservedRunningTime="2025-11-08 00:31:09.984460195 +0000 UTC m=+44.478962084" Nov 8 00:31:10.000344 containerd[1500]: time="2025-11-08T00:31:10.000256907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mxdr,Uid:f0c2bf49-2c83-4e41-9990-a77826efb954,Namespace:calico-system,Attempt:1,} returns sandbox id \"89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7\"" Nov 8 00:31:10.004920 containerd[1500]: time="2025-11-08T00:31:10.004564797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:31:10.028031 systemd[1]: run-netns-cni\x2deb85a22d\x2d5e92\x2d5858\x2dc44e\x2d2afe7d7b9a0b.mount: Deactivated successfully. Nov 8 00:31:10.029404 systemd[1]: run-netns-cni\x2d3a75d9e2\x2d35e7\x2de23c\x2d8f51\x2dc1cf9828d6af.mount: Deactivated successfully. Nov 8 00:31:10.034974 systemd-networkd[1398]: caliab5d6739c46: Link UP Nov 8 00:31:10.035915 systemd-networkd[1398]: caliab5d6739c46: Gained carrier Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.825 [INFO][4469] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.839 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0 goldmane-7c778bb748- calico-system fa6c771d-e186-4cd9-a6e0-552ae2873655 959 0 2025-11-08 00:30:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-6ee8ddef06 goldmane-7c778bb748-xpcnb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliab5d6739c46 [] [] }} ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Namespace="calico-system" Pod="goldmane-7c778bb748-xpcnb" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.839 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Namespace="calico-system" Pod="goldmane-7c778bb748-xpcnb" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.883 [INFO][4485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" HandleID="k8s-pod-network.0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.883 [INFO][4485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" 
HandleID="k8s-pod-network.0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-6ee8ddef06", "pod":"goldmane-7c778bb748-xpcnb", "timestamp":"2025-11-08 00:31:09.883138696 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6ee8ddef06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.883 [INFO][4485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.894 [INFO][4485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.894 [INFO][4485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6ee8ddef06' Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.972 [INFO][4485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:09.983 [INFO][4485] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.005 [INFO][4485] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.008 [INFO][4485] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.012 [INFO][4485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.012 [INFO][4485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.014 [INFO][4485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.017 [INFO][4485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.026 [INFO][4485] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.134/26] block=192.168.12.128/26 handle="k8s-pod-network.0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.026 [INFO][4485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.134/26] handle="k8s-pod-network.0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.026 [INFO][4485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:10.049697 containerd[1500]: 2025-11-08 00:31:10.026 [INFO][4485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.134/26] IPv6=[] ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" HandleID="k8s-pod-network.0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:10.050340 containerd[1500]: 2025-11-08 00:31:10.029 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Namespace="calico-system" Pod="goldmane-7c778bb748-xpcnb" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fa6c771d-e186-4cd9-a6e0-552ae2873655", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"", Pod:"goldmane-7c778bb748-xpcnb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab5d6739c46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:10.050340 containerd[1500]: 2025-11-08 00:31:10.029 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.134/32] ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Namespace="calico-system" Pod="goldmane-7c778bb748-xpcnb" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:10.050340 containerd[1500]: 2025-11-08 00:31:10.030 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab5d6739c46 ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Namespace="calico-system" Pod="goldmane-7c778bb748-xpcnb" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:10.050340 containerd[1500]: 2025-11-08 00:31:10.036 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Namespace="calico-system" Pod="goldmane-7c778bb748-xpcnb" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:10.050340 containerd[1500]: 2025-11-08 00:31:10.037 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" 
Namespace="calico-system" Pod="goldmane-7c778bb748-xpcnb" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fa6c771d-e186-4cd9-a6e0-552ae2873655", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d", Pod:"goldmane-7c778bb748-xpcnb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab5d6739c46", MAC:"d6:f7:10:e3:76:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:10.050340 containerd[1500]: 2025-11-08 00:31:10.047 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d" Namespace="calico-system" Pod="goldmane-7c778bb748-xpcnb" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:10.066982 containerd[1500]: time="2025-11-08T00:31:10.066640446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:10.066982 containerd[1500]: time="2025-11-08T00:31:10.066691053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:10.066982 containerd[1500]: time="2025-11-08T00:31:10.066728633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:10.066982 containerd[1500]: time="2025-11-08T00:31:10.066812822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:10.096437 systemd[1]: Started cri-containerd-0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d.scope - libcontainer container 0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d. 
Nov 8 00:31:10.129362 containerd[1500]: time="2025-11-08T00:31:10.129315666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-xpcnb,Uid:fa6c771d-e186-4cd9-a6e0-552ae2873655,Namespace:calico-system,Attempt:1,} returns sandbox id \"0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d\"" Nov 8 00:31:10.224502 systemd-networkd[1398]: cali2d331f44d48: Gained IPv6LL Nov 8 00:31:10.427923 containerd[1500]: time="2025-11-08T00:31:10.427881790Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:10.429280 containerd[1500]: time="2025-11-08T00:31:10.429221636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:31:10.429489 containerd[1500]: time="2025-11-08T00:31:10.429382870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:31:10.429924 kubelet[2545]: E1108 00:31:10.429872 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:10.430005 kubelet[2545]: E1108 00:31:10.429985 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:10.430198 kubelet[2545]: E1108 00:31:10.430158 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:10.430852 containerd[1500]: time="2025-11-08T00:31:10.430812916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:31:10.641776 containerd[1500]: time="2025-11-08T00:31:10.641702342Z" level=info msg="StopPodSandbox for \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\"" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.692 [INFO][4622] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.692 [INFO][4622] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" iface="eth0" netns="/var/run/netns/cni-10afb01f-2149-baa7-cb48-149464aef04c" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.692 [INFO][4622] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" iface="eth0" netns="/var/run/netns/cni-10afb01f-2149-baa7-cb48-149464aef04c" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.696 [INFO][4622] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" iface="eth0" netns="/var/run/netns/cni-10afb01f-2149-baa7-cb48-149464aef04c" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.696 [INFO][4622] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.696 [INFO][4622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.722 [INFO][4629] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" HandleID="k8s-pod-network.2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.723 [INFO][4629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.723 [INFO][4629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.729 [WARNING][4629] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" HandleID="k8s-pod-network.2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.729 [INFO][4629] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" HandleID="k8s-pod-network.2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.731 [INFO][4629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:10.738284 containerd[1500]: 2025-11-08 00:31:10.733 [INFO][4622] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:10.739285 containerd[1500]: time="2025-11-08T00:31:10.738854740Z" level=info msg="TearDown network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\" successfully" Nov 8 00:31:10.739285 containerd[1500]: time="2025-11-08T00:31:10.738906788Z" level=info msg="StopPodSandbox for \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\" returns successfully" Nov 8 00:31:10.739397 systemd[1]: run-netns-cni\x2d10afb01f\x2d2149\x2dbaa7\x2dcb48\x2d149464aef04c.mount: Deactivated successfully. 
Nov 8 00:31:10.742902 containerd[1500]: time="2025-11-08T00:31:10.742834452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-268vt,Uid:badc83cd-1ec1-4101-8058-782726fa564f,Namespace:kube-system,Attempt:1,}" Nov 8 00:31:10.856828 systemd-networkd[1398]: cali405451cfc68: Link UP Nov 8 00:31:10.858519 systemd-networkd[1398]: cali405451cfc68: Gained carrier Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.778 [INFO][4636] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.788 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0 coredns-66bc5c9577- kube-system badc83cd-1ec1-4101-8058-782726fa564f 981 0 2025-11-08 00:30:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-6ee8ddef06 coredns-66bc5c9577-268vt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali405451cfc68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Namespace="kube-system" Pod="coredns-66bc5c9577-268vt" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.788 [INFO][4636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Namespace="kube-system" Pod="coredns-66bc5c9577-268vt" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.811 [INFO][4648] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" HandleID="k8s-pod-network.6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.812 [INFO][4648] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" HandleID="k8s-pod-network.6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-6ee8ddef06", "pod":"coredns-66bc5c9577-268vt", "timestamp":"2025-11-08 00:31:10.811855023 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6ee8ddef06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.812 [INFO][4648] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.812 [INFO][4648] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
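A few records up, systemd-networkd reports cali405451cfc68 "Link UP" and "Gained carrier": after wiring the veth into the pod's netns, the dataplane brings the host-side end administratively up and the kernel reports carrier. A sketch of that step using the netlink package Calico's Linux dataplane builds on; treat it as an illustration of the mechanism, not the plugin's exact code:

```go
package main

import (
	"fmt"

	"github.com/vishvananda/netlink"
)

// linkUp sets a host-side Calico veth administratively up, which is
// what precedes systemd-networkd's "Link UP" / "Gained carrier"
// messages once the kernel sees carrier on the pair.
func linkUp(name string) error {
	link, err := netlink.LinkByName(name)
	if err != nil {
		return err
	}
	return netlink.LinkSetUp(link)
}

func main() {
	fmt.Println(linkUp("cali405451cfc68"))
}
```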
Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.812 [INFO][4648] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6ee8ddef06' Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.819 [INFO][4648] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.826 [INFO][4648] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.832 [INFO][4648] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.834 [INFO][4648] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.837 [INFO][4648] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.837 [INFO][4648] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.838 [INFO][4648] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461 Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.843 [INFO][4648] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.852 [INFO][4648] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.135/26] block=192.168.12.128/26 handle="k8s-pod-network.6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.852 [INFO][4648] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.135/26] handle="k8s-pod-network.6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.852 [INFO][4648] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
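Throughout these records the IPAM handle is literally the CNI network name joined to the sandbox ID (HandleID="k8s-pod-network.6dcc977d..."), which is why "Releasing address using handleID" during teardown frees exactly the IPs claimed for that one sandbox. The convention is trivial but load-bearing; reproduced as a sketch:

```go
package main

import "fmt"

// handleID reproduces the naming convention visible throughout the
// log: the CNI network name plus the container (sandbox) ID. All
// IPs assigned for a sandbox hang off this one handle, so release
// by handle is exact.
func handleID(network, containerID string) string {
	return fmt.Sprintf("%s.%s", network, containerID)
}

func main() {
	fmt.Println(handleID("k8s-pod-network",
		"6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461"))
}
```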
Nov 8 00:31:10.877948 containerd[1500]: 2025-11-08 00:31:10.852 [INFO][4648] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.135/26] IPv6=[] ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" HandleID="k8s-pod-network.6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.879811 containerd[1500]: 2025-11-08 00:31:10.854 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Namespace="kube-system" Pod="coredns-66bc5c9577-268vt" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"badc83cd-1ec1-4101-8058-782726fa564f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"", Pod:"coredns-66bc5c9577-268vt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali405451cfc68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:10.879811 containerd[1500]: 2025-11-08 00:31:10.855 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.135/32] ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Namespace="kube-system" Pod="coredns-66bc5c9577-268vt" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.879811 containerd[1500]: 2025-11-08 00:31:10.855 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali405451cfc68 ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Namespace="kube-system" Pod="coredns-66bc5c9577-268vt" 
WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.879811 containerd[1500]: 2025-11-08 00:31:10.857 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Namespace="kube-system" Pod="coredns-66bc5c9577-268vt" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.879811 containerd[1500]: 2025-11-08 00:31:10.858 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Namespace="kube-system" Pod="coredns-66bc5c9577-268vt" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"badc83cd-1ec1-4101-8058-782726fa564f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461", Pod:"coredns-66bc5c9577-268vt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali405451cfc68", MAC:"0a:9b:a2:2d:65:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:10.880181 containerd[1500]: 2025-11-08 00:31:10.872 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461" Namespace="kube-system" Pod="coredns-66bc5c9577-268vt" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:10.890484 containerd[1500]: time="2025-11-08T00:31:10.889979875Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:10.892961 
containerd[1500]: time="2025-11-08T00:31:10.892052221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:31:10.892961 containerd[1500]: time="2025-11-08T00:31:10.892132431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:10.895303 kubelet[2545]: E1108 00:31:10.895243 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:10.895424 kubelet[2545]: E1108 00:31:10.895408 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:10.895648 kubelet[2545]: E1108 00:31:10.895625 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xpcnb_calico-system(fa6c771d-e186-4cd9-a6e0-552ae2873655): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:10.902078 kubelet[2545]: E1108 00:31:10.901637 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:31:10.902493 containerd[1500]: time="2025-11-08T00:31:10.902476289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:31:10.906910 containerd[1500]: time="2025-11-08T00:31:10.906861847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:10.907005 containerd[1500]: time="2025-11-08T00:31:10.906982143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:10.907079 containerd[1500]: time="2025-11-08T00:31:10.907062485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:10.907232 containerd[1500]: time="2025-11-08T00:31:10.907190155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:10.921409 systemd[1]: Started cri-containerd-6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461.scope - libcontainer container 6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461. Nov 8 00:31:10.971499 kubelet[2545]: E1108 00:31:10.970804 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:31:10.972417 containerd[1500]: time="2025-11-08T00:31:10.972391728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-268vt,Uid:badc83cd-1ec1-4101-8058-782726fa564f,Namespace:kube-system,Attempt:1,} returns sandbox id \"6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461\"" Nov 8 00:31:10.980159 containerd[1500]: time="2025-11-08T00:31:10.980130815Z" level=info msg="CreateContainer within sandbox \"6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:31:10.994667 containerd[1500]: time="2025-11-08T00:31:10.994635255Z" level=info msg="CreateContainer within sandbox \"6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6bcef2e76aaf487d2d84a21f43b6236951959eb4effc5b8f0a796c3f3db78c3\"" Nov 8 00:31:10.995734 containerd[1500]: time="2025-11-08T00:31:10.995696635Z" level=info msg="StartContainer for \"a6bcef2e76aaf487d2d84a21f43b6236951959eb4effc5b8f0a796c3f3db78c3\"" Nov 8 00:31:11.023399 systemd[1]: Started cri-containerd-a6bcef2e76aaf487d2d84a21f43b6236951959eb4effc5b8f0a796c3f3db78c3.scope - libcontainer container a6bcef2e76aaf487d2d84a21f43b6236951959eb4effc5b8f0a796c3f3db78c3. 
Nov 8 00:31:11.061503 containerd[1500]: time="2025-11-08T00:31:11.061208254Z" level=info msg="StartContainer for \"a6bcef2e76aaf487d2d84a21f43b6236951959eb4effc5b8f0a796c3f3db78c3\" returns successfully" Nov 8 00:31:11.120402 systemd-networkd[1398]: cali724d28be9b9: Gained IPv6LL Nov 8 00:31:11.312462 systemd-networkd[1398]: caliab5d6739c46: Gained IPv6LL Nov 8 00:31:11.333356 containerd[1500]: time="2025-11-08T00:31:11.332835795Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:11.335208 containerd[1500]: time="2025-11-08T00:31:11.334644295Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:31:11.335208 containerd[1500]: time="2025-11-08T00:31:11.334792053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:31:11.335381 kubelet[2545]: E1108 00:31:11.335063 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:11.335381 kubelet[2545]: E1108 00:31:11.335121 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:11.341372 kubelet[2545]: E1108 00:31:11.341333 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:11.341664 kubelet[2545]: E1108 00:31:11.341592 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:31:11.641720 
containerd[1500]: time="2025-11-08T00:31:11.641380934Z" level=info msg="StopPodSandbox for \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\"" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.687 [INFO][4772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.687 [INFO][4772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" iface="eth0" netns="/var/run/netns/cni-2f6a291a-97f2-e23d-b75e-44c821405ddc" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.687 [INFO][4772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" iface="eth0" netns="/var/run/netns/cni-2f6a291a-97f2-e23d-b75e-44c821405ddc" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.688 [INFO][4772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" iface="eth0" netns="/var/run/netns/cni-2f6a291a-97f2-e23d-b75e-44c821405ddc" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.688 [INFO][4772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.688 [INFO][4772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.710 [INFO][4780] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" HandleID="k8s-pod-network.592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.710 [INFO][4780] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.710 [INFO][4780] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.718 [WARNING][4780] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" HandleID="k8s-pod-network.592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.718 [INFO][4780] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" HandleID="k8s-pod-network.592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.720 [INFO][4780] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:11.724439 containerd[1500]: 2025-11-08 00:31:11.722 [INFO][4772] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:11.727327 containerd[1500]: time="2025-11-08T00:31:11.726396210Z" level=info msg="TearDown network for sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\" successfully" Nov 8 00:31:11.727327 containerd[1500]: time="2025-11-08T00:31:11.726427919Z" level=info msg="StopPodSandbox for \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\" returns successfully" Nov 8 00:31:11.728347 systemd[1]: run-netns-cni\x2d2f6a291a\x2d97f2\x2de23d\x2db75e\x2d44c821405ddc.mount: Deactivated successfully. Nov 8 00:31:11.730764 containerd[1500]: time="2025-11-08T00:31:11.730694079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69548547f7-r5lxh,Uid:353c4d02-7f56-4df1-98e1-7b89eab13038,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:31:11.858101 systemd-networkd[1398]: calie63075af086: Link UP Nov 8 00:31:11.861239 systemd-networkd[1398]: calie63075af086: Gained carrier Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.777 [INFO][4787] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.791 [INFO][4787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0 calico-apiserver-69548547f7- calico-apiserver 353c4d02-7f56-4df1-98e1-7b89eab13038 1000 0 2025-11-08 00:30:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69548547f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-6ee8ddef06 calico-apiserver-69548547f7-r5lxh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie63075af086 [] [] }} ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-r5lxh" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.791 [INFO][4787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-r5lxh" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.816 [INFO][4799] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" HandleID="k8s-pod-network.1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.816 [INFO][4799] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" HandleID="k8s-pod-network.1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-6ee8ddef06", 
"pod":"calico-apiserver-69548547f7-r5lxh", "timestamp":"2025-11-08 00:31:11.81685925 +0000 UTC"}, Hostname:"ci-4081-3-6-n-6ee8ddef06", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.817 [INFO][4799] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.817 [INFO][4799] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.817 [INFO][4799] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-6ee8ddef06' Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.823 [INFO][4799] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.828 [INFO][4799] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.833 [INFO][4799] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.835 [INFO][4799] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.838 [INFO][4799] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.838 [INFO][4799] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.839 [INFO][4799] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.844 [INFO][4799] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.852 [INFO][4799] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.136/26] block=192.168.12.128/26 handle="k8s-pod-network.1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.852 [INFO][4799] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.136/26] handle="k8s-pod-network.1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" host="ci-4081-3-6-n-6ee8ddef06" Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.852 [INFO][4799] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:31:11.882087 containerd[1500]: 2025-11-08 00:31:11.852 [INFO][4799] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.136/26] IPv6=[] ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" HandleID="k8s-pod-network.1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.884725 containerd[1500]: 2025-11-08 00:31:11.855 [INFO][4787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-r5lxh" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0", GenerateName:"calico-apiserver-69548547f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"353c4d02-7f56-4df1-98e1-7b89eab13038", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69548547f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"", Pod:"calico-apiserver-69548547f7-r5lxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie63075af086", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.884725 containerd[1500]: 2025-11-08 00:31:11.855 [INFO][4787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.136/32] ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-r5lxh" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.884725 containerd[1500]: 2025-11-08 00:31:11.855 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie63075af086 ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-r5lxh" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.884725 containerd[1500]: 2025-11-08 00:31:11.859 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-r5lxh" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.884725 containerd[1500]: 2025-11-08 00:31:11.860 
[INFO][4787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-r5lxh" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0", GenerateName:"calico-apiserver-69548547f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"353c4d02-7f56-4df1-98e1-7b89eab13038", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69548547f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b", Pod:"calico-apiserver-69548547f7-r5lxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie63075af086", MAC:"7e:05:6b:db:a7:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:11.884725 containerd[1500]: 2025-11-08 00:31:11.875 [INFO][4787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b" Namespace="calico-apiserver" Pod="calico-apiserver-69548547f7-r5lxh" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:11.905422 containerd[1500]: time="2025-11-08T00:31:11.905047012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:31:11.905422 containerd[1500]: time="2025-11-08T00:31:11.905118067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:31:11.905422 containerd[1500]: time="2025-11-08T00:31:11.905150047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.906360 containerd[1500]: time="2025-11-08T00:31:11.905243613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:31:11.934460 systemd[1]: Started cri-containerd-1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b.scope - libcontainer container 1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b. 
Nov 8 00:31:11.982160 kubelet[2545]: E1108 00:31:11.982066 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:31:11.982787 kubelet[2545]: E1108 00:31:11.982694 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:31:11.996231 containerd[1500]: time="2025-11-08T00:31:11.996105867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69548547f7-r5lxh,Uid:353c4d02-7f56-4df1-98e1-7b89eab13038,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b\"" Nov 8 00:31:11.998077 containerd[1500]: time="2025-11-08T00:31:11.997999365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:12.036083 kubelet[2545]: I1108 00:31:12.036016 2545 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-268vt" podStartSLOduration=40.035998751 podStartE2EDuration="40.035998751s" podCreationTimestamp="2025-11-08 00:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:31:12.019977952 +0000 UTC m=+46.514479841" watchObservedRunningTime="2025-11-08 00:31:12.035998751 +0000 UTC m=+46.530500640" Nov 8 00:31:12.208406 systemd-networkd[1398]: cali405451cfc68: Gained IPv6LL Nov 8 00:31:12.436670 containerd[1500]: time="2025-11-08T00:31:12.436597833Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:12.438308 containerd[1500]: time="2025-11-08T00:31:12.438266028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:12.438426 containerd[1500]: time="2025-11-08T00:31:12.438346179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes 
read=77" Nov 8 00:31:12.438553 kubelet[2545]: E1108 00:31:12.438494 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:12.439343 kubelet[2545]: E1108 00:31:12.438553 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:12.439343 kubelet[2545]: E1108 00:31:12.438637 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69548547f7-r5lxh_calico-apiserver(353c4d02-7f56-4df1-98e1-7b89eab13038): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:12.439343 kubelet[2545]: E1108 00:31:12.438670 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:31:12.984519 kubelet[2545]: E1108 00:31:12.984464 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:31:13.104480 systemd-networkd[1398]: calie63075af086: Gained IPv6LL Nov 8 00:31:13.186200 kubelet[2545]: I1108 00:31:13.186157 2545 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:31:13.989541 kubelet[2545]: E1108 00:31:13.989077 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:31:14.203359 kernel: bpftool[4936]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:31:14.451858 systemd-networkd[1398]: vxlan.calico: 
Link UP Nov 8 00:31:14.451865 systemd-networkd[1398]: vxlan.calico: Gained carrier Nov 8 00:31:16.432934 systemd-networkd[1398]: vxlan.calico: Gained IPv6LL Nov 8 00:31:19.647380 containerd[1500]: time="2025-11-08T00:31:19.647077073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:20.093146 containerd[1500]: time="2025-11-08T00:31:20.093064269Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:20.094732 containerd[1500]: time="2025-11-08T00:31:20.094699447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:20.095128 containerd[1500]: time="2025-11-08T00:31:20.094769138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:20.095202 kubelet[2545]: E1108 00:31:20.095040 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:20.095202 kubelet[2545]: E1108 00:31:20.095087 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:20.095767 kubelet[2545]: E1108 00:31:20.095198 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69548547f7-zwltf_calico-apiserver(e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:20.095767 kubelet[2545]: E1108 00:31:20.095231 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:31:21.642540 containerd[1500]: time="2025-11-08T00:31:21.642076077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:31:22.091581 containerd[1500]: time="2025-11-08T00:31:22.091428711Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:22.092544 containerd[1500]: time="2025-11-08T00:31:22.092484920Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:31:22.092699 containerd[1500]: time="2025-11-08T00:31:22.092569269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:22.092801 kubelet[2545]: E1108 00:31:22.092706 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:22.092801 kubelet[2545]: E1108 00:31:22.092760 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:22.093195 kubelet[2545]: E1108 00:31:22.092990 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-b7584974-6v6qw_calico-system(23fa7156-ab47-44e8-be85-07831bed27aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:22.093195 kubelet[2545]: E1108 00:31:22.093037 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:31:22.093874 containerd[1500]: time="2025-11-08T00:31:22.093727899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:31:22.536666 containerd[1500]: time="2025-11-08T00:31:22.536521444Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:22.537942 containerd[1500]: time="2025-11-08T00:31:22.537889399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:31:22.538114 containerd[1500]: time="2025-11-08T00:31:22.537982274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:31:22.538550 kubelet[2545]: E1108 00:31:22.538343 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:22.538550 kubelet[2545]: E1108 00:31:22.538384 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:22.538550 kubelet[2545]: E1108 00:31:22.538482 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6fcd4bdfbb-9mv84_calico-system(13800756-7bce-44ba-ac46-4639ec34a694): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:22.539618 containerd[1500]: time="2025-11-08T00:31:22.539587535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:31:22.966418 containerd[1500]: time="2025-11-08T00:31:22.966335928Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:22.968555 containerd[1500]: time="2025-11-08T00:31:22.968241424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:31:22.968725 containerd[1500]: time="2025-11-08T00:31:22.968348917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:22.969082 kubelet[2545]: E1108 00:31:22.968888 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:22.969082 kubelet[2545]: E1108 00:31:22.968940 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:22.969082 kubelet[2545]: E1108 00:31:22.969027 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6fcd4bdfbb-9mv84_calico-system(13800756-7bce-44ba-ac46-4639ec34a694): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:22.970958 kubelet[2545]: E1108 00:31:22.969128 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:31:24.641846 containerd[1500]: time="2025-11-08T00:31:24.640863108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:31:25.086775 containerd[1500]: time="2025-11-08T00:31:25.086693265Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:25.088261 containerd[1500]: time="2025-11-08T00:31:25.088172468Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:31:25.088359 containerd[1500]: time="2025-11-08T00:31:25.088311139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:31:25.088621 kubelet[2545]: E1108 00:31:25.088526 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:25.089178 kubelet[2545]: E1108 00:31:25.088619 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:25.089178 kubelet[2545]: E1108 00:31:25.088880 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:25.089969 containerd[1500]: time="2025-11-08T00:31:25.089856698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:31:25.519090 containerd[1500]: time="2025-11-08T00:31:25.518396091Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:25.521218 containerd[1500]: time="2025-11-08T00:31:25.520995182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:31:25.521218 
containerd[1500]: time="2025-11-08T00:31:25.521119766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:25.521995 kubelet[2545]: E1108 00:31:25.521795 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:25.521995 kubelet[2545]: E1108 00:31:25.521853 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:25.523092 kubelet[2545]: E1108 00:31:25.522131 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xpcnb_calico-system(fa6c771d-e186-4cd9-a6e0-552ae2873655): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:25.523092 kubelet[2545]: E1108 00:31:25.522179 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:31:25.523328 containerd[1500]: time="2025-11-08T00:31:25.522382842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:31:25.638811 containerd[1500]: time="2025-11-08T00:31:25.638063684Z" level=info msg="StopPodSandbox for \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\"" Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.737 [WARNING][5044] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fa6c771d-e186-4cd9-a6e0-552ae2873655", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d", Pod:"goldmane-7c778bb748-xpcnb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab5d6739c46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.740 [INFO][5044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.740 [INFO][5044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" iface="eth0" netns="" Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.740 [INFO][5044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.740 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.759 [INFO][5055] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" HandleID="k8s-pod-network.45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.759 [INFO][5055] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.759 [INFO][5055] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.765 [WARNING][5055] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" HandleID="k8s-pod-network.45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.765 [INFO][5055] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" HandleID="k8s-pod-network.45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.766 [INFO][5055] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:25.770117 containerd[1500]: 2025-11-08 00:31:25.768 [INFO][5044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:25.770890 containerd[1500]: time="2025-11-08T00:31:25.770072563Z" level=info msg="TearDown network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\" successfully" Nov 8 00:31:25.770890 containerd[1500]: time="2025-11-08T00:31:25.770398176Z" level=info msg="StopPodSandbox for \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\" returns successfully" Nov 8 00:31:25.771241 containerd[1500]: time="2025-11-08T00:31:25.771216306Z" level=info msg="RemovePodSandbox for \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\"" Nov 8 00:31:25.771241 containerd[1500]: time="2025-11-08T00:31:25.771242034Z" level=info msg="Forcibly stopping sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\"" Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.801 [WARNING][5069] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fa6c771d-e186-4cd9-a6e0-552ae2873655", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"0bf62585751e5299e07f7775efafbd77c43a2fd05de7947d0a8b85a136ece03d", Pod:"goldmane-7c778bb748-xpcnb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliab5d6739c46", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.801 [INFO][5069] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.801 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" iface="eth0" netns="" Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.801 [INFO][5069] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.801 [INFO][5069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.820 [INFO][5076] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" HandleID="k8s-pod-network.45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.820 [INFO][5076] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.820 [INFO][5076] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.825 [WARNING][5076] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" HandleID="k8s-pod-network.45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.825 [INFO][5076] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" HandleID="k8s-pod-network.45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-goldmane--7c778bb748--xpcnb-eth0" Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.826 [INFO][5076] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:25.830287 containerd[1500]: 2025-11-08 00:31:25.828 [INFO][5069] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51" Nov 8 00:31:25.830287 containerd[1500]: time="2025-11-08T00:31:25.830136629Z" level=info msg="TearDown network for sandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\" successfully" Nov 8 00:31:25.845051 containerd[1500]: time="2025-11-08T00:31:25.844998937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:25.845126 containerd[1500]: time="2025-11-08T00:31:25.845059571Z" level=info msg="RemovePodSandbox \"45aaa73544776fe152d683aed5bb38164a450b473399e51cbc7c52b23cac0b51\" returns successfully" Nov 8 00:31:25.845725 containerd[1500]: time="2025-11-08T00:31:25.845520879Z" level=info msg="StopPodSandbox for \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\"" Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.871 [WARNING][5091] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.871 [INFO][5091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.871 [INFO][5091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" iface="eth0" netns="" Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.872 [INFO][5091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.872 [INFO][5091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.893 [INFO][5098] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" HandleID="k8s-pod-network.a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.893 [INFO][5098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.893 [INFO][5098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.900 [WARNING][5098] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" HandleID="k8s-pod-network.a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.900 [INFO][5098] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" HandleID="k8s-pod-network.a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.901 [INFO][5098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:25.905231 containerd[1500]: 2025-11-08 00:31:25.903 [INFO][5091] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:25.905936 containerd[1500]: time="2025-11-08T00:31:25.905244434Z" level=info msg="TearDown network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\" successfully" Nov 8 00:31:25.905936 containerd[1500]: time="2025-11-08T00:31:25.905320488Z" level=info msg="StopPodSandbox for \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\" returns successfully" Nov 8 00:31:25.905936 containerd[1500]: time="2025-11-08T00:31:25.905875312Z" level=info msg="RemovePodSandbox for \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\"" Nov 8 00:31:25.905936 containerd[1500]: time="2025-11-08T00:31:25.905920287Z" level=info msg="Forcibly stopping sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\"" Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.939 [WARNING][5113] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" WorkloadEndpoint="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.940 [INFO][5113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.940 [INFO][5113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" iface="eth0" netns="" Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.940 [INFO][5113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.940 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.955 [INFO][5121] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" HandleID="k8s-pod-network.a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.955 [INFO][5121] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.955 [INFO][5121] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.960 [WARNING][5121] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" HandleID="k8s-pod-network.a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.961 [INFO][5121] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" HandleID="k8s-pod-network.a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-whisker--65554d4d5b--g4wqz-eth0" Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.962 [INFO][5121] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:25.965445 containerd[1500]: 2025-11-08 00:31:25.963 [INFO][5113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc" Nov 8 00:31:25.966309 containerd[1500]: time="2025-11-08T00:31:25.965502515Z" level=info msg="TearDown network for sandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\" successfully" Nov 8 00:31:25.968471 containerd[1500]: time="2025-11-08T00:31:25.968440094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:25.969147 containerd[1500]: time="2025-11-08T00:31:25.968615343Z" level=info msg="RemovePodSandbox \"a1c1c1713377f54650699c75434de9a04d2a75ec03f9aa1865551ecb68e42abc\" returns successfully" Nov 8 00:31:25.969702 containerd[1500]: time="2025-11-08T00:31:25.969678664Z" level=info msg="StopPodSandbox for \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\"" Nov 8 00:31:25.973340 containerd[1500]: time="2025-11-08T00:31:25.973221781Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:25.974136 containerd[1500]: time="2025-11-08T00:31:25.974105976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:31:25.974300 containerd[1500]: time="2025-11-08T00:31:25.974170667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:31:25.975391 kubelet[2545]: E1108 00:31:25.975355 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:25.975457 kubelet[2545]: E1108 00:31:25.975397 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:25.975580 kubelet[2545]: E1108 00:31:25.975549 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:25.975653 kubelet[2545]: E1108 00:31:25.975590 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:31:25.976479 containerd[1500]: time="2025-11-08T00:31:25.976456028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.008 [WARNING][5135] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0", GenerateName:"calico-apiserver-69548547f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"353c4d02-7f56-4df1-98e1-7b89eab13038", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69548547f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b", Pod:"calico-apiserver-69548547f7-r5lxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie63075af086", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.008 [INFO][5135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.008 [INFO][5135] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" iface="eth0" netns="" Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.008 [INFO][5135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.008 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.038 [INFO][5143] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" HandleID="k8s-pod-network.592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.039 [INFO][5143] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.039 [INFO][5143] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.045 [WARNING][5143] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" HandleID="k8s-pod-network.592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.045 [INFO][5143] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" HandleID="k8s-pod-network.592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.047 [INFO][5143] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.053182 containerd[1500]: 2025-11-08 00:31:26.051 [INFO][5135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:26.055517 containerd[1500]: time="2025-11-08T00:31:26.053121539Z" level=info msg="TearDown network for sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\" successfully" Nov 8 00:31:26.055517 containerd[1500]: time="2025-11-08T00:31:26.053646007Z" level=info msg="StopPodSandbox for \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\" returns successfully" Nov 8 00:31:26.055517 containerd[1500]: time="2025-11-08T00:31:26.055319045Z" level=info msg="RemovePodSandbox for \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\"" Nov 8 00:31:26.055517 containerd[1500]: time="2025-11-08T00:31:26.055340014Z" level=info msg="Forcibly stopping sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\"" Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.101 [WARNING][5157] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0", GenerateName:"calico-apiserver-69548547f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"353c4d02-7f56-4df1-98e1-7b89eab13038", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69548547f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"1c60bd3d89abb8ecf54f5437d9d682539dcf861d382ee606c9426e00c24dff2b", Pod:"calico-apiserver-69548547f7-r5lxh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie63075af086", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.101 [INFO][5157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.101 [INFO][5157] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" iface="eth0" netns="" Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.101 [INFO][5157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.102 [INFO][5157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.118 [INFO][5164] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" HandleID="k8s-pod-network.592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.118 [INFO][5164] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.118 [INFO][5164] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.124 [WARNING][5164] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" HandleID="k8s-pod-network.592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.124 [INFO][5164] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" HandleID="k8s-pod-network.592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--r5lxh-eth0" Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.126 [INFO][5164] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.130063 containerd[1500]: 2025-11-08 00:31:26.128 [INFO][5157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3" Nov 8 00:31:26.131296 containerd[1500]: time="2025-11-08T00:31:26.130581974Z" level=info msg="TearDown network for sandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\" successfully" Nov 8 00:31:26.135319 containerd[1500]: time="2025-11-08T00:31:26.135005478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:26.135319 containerd[1500]: time="2025-11-08T00:31:26.135066092Z" level=info msg="RemovePodSandbox \"592bdc9f0aca59624c4fca02c37edeb99ff39566441f301020b1b6bc741430b3\" returns successfully" Nov 8 00:31:26.135557 containerd[1500]: time="2025-11-08T00:31:26.135534574Z" level=info msg="StopPodSandbox for \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\"" Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.163 [WARNING][5179] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0", GenerateName:"calico-kube-controllers-b7584974-", Namespace:"calico-system", SelfLink:"", UID:"23fa7156-ab47-44e8-be85-07831bed27aa", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7584974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949", Pod:"calico-kube-controllers-b7584974-6v6qw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa38c7c6f9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.164 [INFO][5179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.164 [INFO][5179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" iface="eth0" netns="" Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.164 [INFO][5179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.164 [INFO][5179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.185 [INFO][5186] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" HandleID="k8s-pod-network.1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.186 [INFO][5186] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.186 [INFO][5186] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.191 [WARNING][5186] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" HandleID="k8s-pod-network.1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.191 [INFO][5186] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" HandleID="k8s-pod-network.1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.193 [INFO][5186] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.196824 containerd[1500]: 2025-11-08 00:31:26.195 [INFO][5179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:26.197205 containerd[1500]: time="2025-11-08T00:31:26.196884202Z" level=info msg="TearDown network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\" successfully" Nov 8 00:31:26.197205 containerd[1500]: time="2025-11-08T00:31:26.196938854Z" level=info msg="StopPodSandbox for \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\" returns successfully" Nov 8 00:31:26.197656 containerd[1500]: time="2025-11-08T00:31:26.197603455Z" level=info msg="RemovePodSandbox for \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\"" Nov 8 00:31:26.197656 containerd[1500]: time="2025-11-08T00:31:26.197637699Z" level=info msg="Forcibly stopping sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\"" Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.233 [WARNING][5201] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0", GenerateName:"calico-kube-controllers-b7584974-", Namespace:"calico-system", SelfLink:"", UID:"23fa7156-ab47-44e8-be85-07831bed27aa", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b7584974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"ab64a0b2a26611543f18d527fc6e4d602884a9f4def07831143e915bcdb3e949", Pod:"calico-kube-controllers-b7584974-6v6qw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa38c7c6f9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.233 [INFO][5201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.233 [INFO][5201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" iface="eth0" netns="" Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.233 [INFO][5201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.233 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.259 [INFO][5208] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" HandleID="k8s-pod-network.1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.259 [INFO][5208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.259 [INFO][5208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.265 [WARNING][5208] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" HandleID="k8s-pod-network.1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.265 [INFO][5208] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" HandleID="k8s-pod-network.1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--kube--controllers--b7584974--6v6qw-eth0" Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.266 [INFO][5208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.270038 containerd[1500]: 2025-11-08 00:31:26.268 [INFO][5201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c" Nov 8 00:31:26.270435 containerd[1500]: time="2025-11-08T00:31:26.270089318Z" level=info msg="TearDown network for sandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\" successfully" Nov 8 00:31:26.274142 containerd[1500]: time="2025-11-08T00:31:26.274058517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:26.274142 containerd[1500]: time="2025-11-08T00:31:26.274119392Z" level=info msg="RemovePodSandbox \"1f374607ea52a4b8bf622bd678181f8553c3eab80f03e1017e98ff098fc27f5c\" returns successfully" Nov 8 00:31:26.274569 containerd[1500]: time="2025-11-08T00:31:26.274548660Z" level=info msg="StopPodSandbox for \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\"" Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.306 [WARNING][5222] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0", GenerateName:"calico-apiserver-69548547f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69548547f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873", Pod:"calico-apiserver-69548547f7-zwltf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic36f7f0af98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.306 [INFO][5222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.306 [INFO][5222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" iface="eth0" netns="" Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.306 [INFO][5222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.306 [INFO][5222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.326 [INFO][5229] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" HandleID="k8s-pod-network.03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.326 [INFO][5229] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.326 [INFO][5229] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.331 [WARNING][5229] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" HandleID="k8s-pod-network.03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.331 [INFO][5229] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" HandleID="k8s-pod-network.03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.332 [INFO][5229] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.336977 containerd[1500]: 2025-11-08 00:31:26.334 [INFO][5222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:26.336977 containerd[1500]: time="2025-11-08T00:31:26.335477626Z" level=info msg="TearDown network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\" successfully" Nov 8 00:31:26.336977 containerd[1500]: time="2025-11-08T00:31:26.335501330Z" level=info msg="StopPodSandbox for \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\" returns successfully" Nov 8 00:31:26.336977 containerd[1500]: time="2025-11-08T00:31:26.336048260Z" level=info msg="RemovePodSandbox for \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\"" Nov 8 00:31:26.336977 containerd[1500]: time="2025-11-08T00:31:26.336068909Z" level=info msg="Forcibly stopping sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\"" Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.361 [WARNING][5243] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0", GenerateName:"calico-apiserver-69548547f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69548547f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"b5658e5002b664ee8690939bba90c5534077015803465f84b695f388c0961873", Pod:"calico-apiserver-69548547f7-zwltf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic36f7f0af98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.361 [INFO][5243] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.361 [INFO][5243] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" iface="eth0" netns="" Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.361 [INFO][5243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.361 [INFO][5243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.378 [INFO][5250] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" HandleID="k8s-pod-network.03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.378 [INFO][5250] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.378 [INFO][5250] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.385 [WARNING][5250] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" HandleID="k8s-pod-network.03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.385 [INFO][5250] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" HandleID="k8s-pod-network.03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-calico--apiserver--69548547f7--zwltf-eth0" Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.387 [INFO][5250] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.391570 containerd[1500]: 2025-11-08 00:31:26.390 [INFO][5243] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273" Nov 8 00:31:26.391940 containerd[1500]: time="2025-11-08T00:31:26.391597491Z" level=info msg="TearDown network for sandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\" successfully" Nov 8 00:31:26.395212 containerd[1500]: time="2025-11-08T00:31:26.395171047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:26.395278 containerd[1500]: time="2025-11-08T00:31:26.395241108Z" level=info msg="RemovePodSandbox \"03d4c6b90bb8bba4b2bc110d97fa65265b80e699edff4218e392b1a093586273\" returns successfully" Nov 8 00:31:26.395768 containerd[1500]: time="2025-11-08T00:31:26.395713577Z" level=info msg="StopPodSandbox for \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\"" Nov 8 00:31:26.397143 containerd[1500]: time="2025-11-08T00:31:26.397085810Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:26.398186 containerd[1500]: time="2025-11-08T00:31:26.398018725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:26.398186 containerd[1500]: time="2025-11-08T00:31:26.398128762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:26.398932 kubelet[2545]: E1108 00:31:26.398436 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:26.398932 kubelet[2545]: E1108 00:31:26.398488 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:26.398932 kubelet[2545]: E1108 00:31:26.398561 
2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69548547f7-r5lxh_calico-apiserver(353c4d02-7f56-4df1-98e1-7b89eab13038): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:26.398932 kubelet[2545]: E1108 00:31:26.398591 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.426 [WARNING][5264] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"badc83cd-1ec1-4101-8058-782726fa564f", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461", Pod:"coredns-66bc5c9577-268vt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali405451cfc68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.426 [INFO][5264] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.426 [INFO][5264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" iface="eth0" netns="" Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.426 [INFO][5264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.426 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.448 [INFO][5271] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" HandleID="k8s-pod-network.2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.448 [INFO][5271] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.448 [INFO][5271] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.456 [WARNING][5271] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" HandleID="k8s-pod-network.2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.456 [INFO][5271] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" HandleID="k8s-pod-network.2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.458 [INFO][5271] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.461972 containerd[1500]: 2025-11-08 00:31:26.460 [INFO][5264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:26.462464 containerd[1500]: time="2025-11-08T00:31:26.462025192Z" level=info msg="TearDown network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\" successfully" Nov 8 00:31:26.462464 containerd[1500]: time="2025-11-08T00:31:26.462057794Z" level=info msg="StopPodSandbox for \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\" returns successfully" Nov 8 00:31:26.462647 containerd[1500]: time="2025-11-08T00:31:26.462620752Z" level=info msg="RemovePodSandbox for \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\"" Nov 8 00:31:26.462678 containerd[1500]: time="2025-11-08T00:31:26.462654286Z" level=info msg="Forcibly stopping sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\"" Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.495 [WARNING][5285] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"badc83cd-1ec1-4101-8058-782726fa564f", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"6dcc977d82e52ef5c1685484e47e23a6cdb288f1dc1b78ace82757535bfe9461", Pod:"coredns-66bc5c9577-268vt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali405451cfc68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.496 [INFO][5285] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.496 [INFO][5285] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" iface="eth0" netns="" Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.496 [INFO][5285] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.496 [INFO][5285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.515 [INFO][5293] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" HandleID="k8s-pod-network.2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.515 [INFO][5293] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.515 [INFO][5293] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.522 [WARNING][5293] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" HandleID="k8s-pod-network.2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.522 [INFO][5293] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" HandleID="k8s-pod-network.2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--268vt-eth0" Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.523 [INFO][5293] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.527297 containerd[1500]: 2025-11-08 00:31:26.525 [INFO][5285] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6" Nov 8 00:31:26.528071 containerd[1500]: time="2025-11-08T00:31:26.527326756Z" level=info msg="TearDown network for sandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\" successfully" Nov 8 00:31:26.530303 containerd[1500]: time="2025-11-08T00:31:26.530270846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:26.530365 containerd[1500]: time="2025-11-08T00:31:26.530310782Z" level=info msg="RemovePodSandbox \"2d4e0ca983f3a552bbc80fe3134be733306d5e91496d1c5c2f7f8f44dadd22d6\" returns successfully" Nov 8 00:31:26.530795 containerd[1500]: time="2025-11-08T00:31:26.530767821Z" level=info msg="StopPodSandbox for \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\"" Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.564 [WARNING][5308] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"69a2d468-d7b2-4842-a119-55b88cf0a542", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc", Pod:"coredns-66bc5c9577-8bzvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d331f44d48", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.565 [INFO][5308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.565 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" iface="eth0" netns="" Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.565 [INFO][5308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.565 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.592 [INFO][5316] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" HandleID="k8s-pod-network.f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.592 [INFO][5316] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.592 [INFO][5316] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.600 [WARNING][5316] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" HandleID="k8s-pod-network.f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.601 [INFO][5316] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" HandleID="k8s-pod-network.f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.602 [INFO][5316] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.609352 containerd[1500]: 2025-11-08 00:31:26.605 [INFO][5308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:26.609352 containerd[1500]: time="2025-11-08T00:31:26.606884558Z" level=info msg="TearDown network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\" successfully" Nov 8 00:31:26.609352 containerd[1500]: time="2025-11-08T00:31:26.606924663Z" level=info msg="StopPodSandbox for \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\" returns successfully" Nov 8 00:31:26.609352 containerd[1500]: time="2025-11-08T00:31:26.608956787Z" level=info msg="RemovePodSandbox for \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\"" Nov 8 00:31:26.609352 containerd[1500]: time="2025-11-08T00:31:26.608997183Z" level=info msg="Forcibly stopping sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\"" Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.649 [WARNING][5330] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"69a2d468-d7b2-4842-a119-55b88cf0a542", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"e0c7a5e9a98d67832cef440075368b81145ccc672b5599513fdcbbe19e4ed0dc", Pod:"coredns-66bc5c9577-8bzvr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2d331f44d48", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.649 [INFO][5330] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.649 [INFO][5330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" iface="eth0" netns="" Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.649 [INFO][5330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.649 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.672 [INFO][5337] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" HandleID="k8s-pod-network.f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.672 [INFO][5337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.672 [INFO][5337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.679 [WARNING][5337] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" HandleID="k8s-pod-network.f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.679 [INFO][5337] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" HandleID="k8s-pod-network.f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-coredns--66bc5c9577--8bzvr-eth0" Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.681 [INFO][5337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.687837 containerd[1500]: 2025-11-08 00:31:26.683 [INFO][5330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c" Nov 8 00:31:26.687837 containerd[1500]: time="2025-11-08T00:31:26.687471696Z" level=info msg="TearDown network for sandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\" successfully" Nov 8 00:31:26.691368 containerd[1500]: time="2025-11-08T00:31:26.691328343Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:26.691368 containerd[1500]: time="2025-11-08T00:31:26.691374030Z" level=info msg="RemovePodSandbox \"f2847646eb2e57e05417e728b0acc5bb092b42589527d82b03a1f2de6c9b8c6c\" returns successfully" Nov 8 00:31:26.691748 containerd[1500]: time="2025-11-08T00:31:26.691720842Z" level=info msg="StopPodSandbox for \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\"" Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.724 [WARNING][5351] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0c2bf49-2c83-4e41-9990-a77826efb954", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7", Pod:"csi-node-driver-4mxdr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724d28be9b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.725 [INFO][5351] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.725 [INFO][5351] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" iface="eth0" netns="" Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.725 [INFO][5351] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.725 [INFO][5351] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.742 [INFO][5358] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" HandleID="k8s-pod-network.ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.742 [INFO][5358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.743 [INFO][5358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.748 [WARNING][5358] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" HandleID="k8s-pod-network.ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.749 [INFO][5358] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" HandleID="k8s-pod-network.ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.750 [INFO][5358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.754687 containerd[1500]: 2025-11-08 00:31:26.753 [INFO][5351] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:26.755099 containerd[1500]: time="2025-11-08T00:31:26.754748848Z" level=info msg="TearDown network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\" successfully" Nov 8 00:31:26.755099 containerd[1500]: time="2025-11-08T00:31:26.754792199Z" level=info msg="StopPodSandbox for \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\" returns successfully" Nov 8 00:31:26.755607 containerd[1500]: time="2025-11-08T00:31:26.755368744Z" level=info msg="RemovePodSandbox for \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\"" Nov 8 00:31:26.755607 containerd[1500]: time="2025-11-08T00:31:26.755402187Z" level=info msg="Forcibly stopping sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\"" Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.785 [WARNING][5372] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0c2bf49-2c83-4e41-9990-a77826efb954", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 30, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-6ee8ddef06", ContainerID:"89f0579a6bc136ccaa19b2cefe57d0fa50260fc70e3998f1b76386528da734b7", Pod:"csi-node-driver-4mxdr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali724d28be9b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.785 [INFO][5372] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.785 [INFO][5372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" iface="eth0" netns="" Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.785 [INFO][5372] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.785 [INFO][5372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.803 [INFO][5380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" HandleID="k8s-pod-network.ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.803 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.804 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.810 [WARNING][5380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" HandleID="k8s-pod-network.ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.810 [INFO][5380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" HandleID="k8s-pod-network.ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Workload="ci--4081--3--6--n--6ee8ddef06-k8s-csi--node--driver--4mxdr-eth0" Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.812 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:31:26.817922 containerd[1500]: 2025-11-08 00:31:26.815 [INFO][5372] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b" Nov 8 00:31:26.818500 containerd[1500]: time="2025-11-08T00:31:26.818018819Z" level=info msg="TearDown network for sandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\" successfully" Nov 8 00:31:26.823272 containerd[1500]: time="2025-11-08T00:31:26.823194317Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:31:26.823314 containerd[1500]: time="2025-11-08T00:31:26.823290579Z" level=info msg="RemovePodSandbox \"ea68bb16af46412f3d9653aabd77337e515f004e745cc71326659ae62b86f61b\" returns successfully" Nov 8 00:31:31.643351 kubelet[2545]: E1108 00:31:31.643293 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:31:33.963821 systemd[1]: run-containerd-runc-k8s.io-96a0d527e1135c76c76070754d2299b5e0f6502c3191c61bfeafbf9147d5b2b3-runc.aQ5p1E.mount: Deactivated successfully. 
Nov 8 00:31:36.644278 kubelet[2545]: E1108 00:31:36.642701 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:31:37.642936 kubelet[2545]: E1108 00:31:37.642889 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:31:39.642683 kubelet[2545]: E1108 00:31:39.642390 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:31:39.645314 kubelet[2545]: E1108 00:31:39.643796 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:31:40.643419 kubelet[2545]: E1108 00:31:40.643366 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:31:46.642523 containerd[1500]: time="2025-11-08T00:31:46.642441045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:47.064868 containerd[1500]: time="2025-11-08T00:31:47.064535199Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:47.066355 containerd[1500]: time="2025-11-08T00:31:47.066167638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:47.066355 containerd[1500]: time="2025-11-08T00:31:47.066302165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:47.068312 kubelet[2545]: E1108 00:31:47.066509 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:47.068312 kubelet[2545]: E1108 00:31:47.066598 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:47.068312 kubelet[2545]: E1108 00:31:47.066727 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69548547f7-zwltf_calico-apiserver(e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:47.068312 kubelet[2545]: E1108 00:31:47.066776 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:31:47.643611 containerd[1500]: time="2025-11-08T00:31:47.642917569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:31:48.084840 containerd[1500]: time="2025-11-08T00:31:48.084656017Z" level=info msg="trying next host - 
response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:48.086389 containerd[1500]: time="2025-11-08T00:31:48.086313495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:31:48.086484 containerd[1500]: time="2025-11-08T00:31:48.086421664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:48.086672 kubelet[2545]: E1108 00:31:48.086625 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:48.087127 kubelet[2545]: E1108 00:31:48.086687 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:31:48.087127 kubelet[2545]: E1108 00:31:48.086791 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-b7584974-6v6qw_calico-system(23fa7156-ab47-44e8-be85-07831bed27aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:48.087127 kubelet[2545]: E1108 00:31:48.086839 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:31:50.642237 containerd[1500]: time="2025-11-08T00:31:50.641961277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:31:51.084008 containerd[1500]: time="2025-11-08T00:31:51.083708611Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:51.085417 containerd[1500]: time="2025-11-08T00:31:51.085358260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:31:51.085586 containerd[1500]: time="2025-11-08T00:31:51.085461681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, 
bytes read=73" Nov 8 00:31:51.085675 kubelet[2545]: E1108 00:31:51.085606 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:51.085977 kubelet[2545]: E1108 00:31:51.085681 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:31:51.085977 kubelet[2545]: E1108 00:31:51.085785 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6fcd4bdfbb-9mv84_calico-system(13800756-7bce-44ba-ac46-4639ec34a694): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:51.087378 containerd[1500]: time="2025-11-08T00:31:51.087330092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:31:51.510236 containerd[1500]: time="2025-11-08T00:31:51.510159093Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:51.512011 containerd[1500]: time="2025-11-08T00:31:51.511924086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:31:51.512173 containerd[1500]: time="2025-11-08T00:31:51.512094949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:31:51.512806 kubelet[2545]: E1108 00:31:51.512450 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:51.512806 kubelet[2545]: E1108 00:31:51.512501 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:31:51.512806 kubelet[2545]: E1108 00:31:51.512565 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6fcd4bdfbb-9mv84_calico-system(13800756-7bce-44ba-ac46-4639ec34a694): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:51.512912 kubelet[2545]: E1108 00:31:51.512598 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:31:51.647530 containerd[1500]: time="2025-11-08T00:31:51.647457402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:31:52.282175 containerd[1500]: time="2025-11-08T00:31:52.282130396Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:52.283588 containerd[1500]: time="2025-11-08T00:31:52.283527954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:31:52.283670 containerd[1500]: time="2025-11-08T00:31:52.283626084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:52.283896 kubelet[2545]: E1108 00:31:52.283849 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:52.284221 kubelet[2545]: E1108 00:31:52.283900 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:31:52.284221 kubelet[2545]: E1108 00:31:52.284018 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xpcnb_calico-system(fa6c771d-e186-4cd9-a6e0-552ae2873655): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:52.285031 kubelet[2545]: E1108 00:31:52.284385 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:31:52.641839 containerd[1500]: time="2025-11-08T00:31:52.641185404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:31:53.095967 containerd[1500]: time="2025-11-08T00:31:53.095912660Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:53.097066 containerd[1500]: time="2025-11-08T00:31:53.097036226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:31:53.097166 containerd[1500]: time="2025-11-08T00:31:53.097055111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:31:53.097314 kubelet[2545]: E1108 00:31:53.097273 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:53.097380 kubelet[2545]: E1108 00:31:53.097319 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:31:53.097525 kubelet[2545]: E1108 00:31:53.097475 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69548547f7-r5lxh_calico-apiserver(353c4d02-7f56-4df1-98e1-7b89eab13038): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:53.097525 kubelet[2545]: E1108 00:31:53.097514 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:31:53.098063 containerd[1500]: time="2025-11-08T00:31:53.098015858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:31:53.550878 containerd[1500]: time="2025-11-08T00:31:53.550742139Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:53.552117 containerd[1500]: time="2025-11-08T00:31:53.551910970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:31:53.552117 containerd[1500]: time="2025-11-08T00:31:53.551986418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:31:53.552232 kubelet[2545]: E1108 00:31:53.552160 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:53.552232 kubelet[2545]: E1108 00:31:53.552204 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:31:53.552542 kubelet[2545]: E1108 00:31:53.552296 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:53.554633 containerd[1500]: time="2025-11-08T00:31:53.554605895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:31:54.019565 containerd[1500]: time="2025-11-08T00:31:54.019402876Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:31:54.021275 containerd[1500]: time="2025-11-08T00:31:54.021230188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:31:54.021484 containerd[1500]: time="2025-11-08T00:31:54.021319713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:31:54.022470 kubelet[2545]: E1108 00:31:54.021799 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:54.022470 kubelet[2545]: E1108 00:31:54.021921 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:31:54.022470 kubelet[2545]: E1108 00:31:54.022012 2545 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container csi-node-driver-registrar start failed in pod csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:31:54.022759 kubelet[2545]: E1108 00:31:54.022075 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:31:56.179636 systemd[1]: Started sshd@7-157.180.31.220:22-45.135.232.47:40171.service - OpenSSH per-connection server daemon (45.135.232.47:40171). Nov 8 00:31:56.564996 sshd[5431]: Invalid user default from 45.135.232.47 port 40171 Nov 8 00:31:56.621465 sshd[5431]: Received disconnect from 45.135.232.47 port 40171:11: Client disconnecting normally [preauth] Nov 8 00:31:56.621465 sshd[5431]: Disconnected from invalid user default 45.135.232.47 port 40171 [preauth] Nov 8 00:31:56.622714 systemd[1]: sshd@7-157.180.31.220:22-45.135.232.47:40171.service: Deactivated successfully. Nov 8 00:31:57.640990 kubelet[2545]: E1108 00:31:57.640690 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:31:58.642231 kubelet[2545]: E1108 00:31:58.641879 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:31:59.978443 systemd[1]: Started sshd@8-157.180.31.220:22-147.75.109.163:46582.service - OpenSSH per-connection server daemon (147.75.109.163:46582). 
Nov 8 00:32:00.994985 sshd[5439]: Accepted publickey for core from 147.75.109.163 port 46582 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:00.998457 sshd[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:01.009947 systemd-logind[1486]: New session 8 of user core. Nov 8 00:32:01.014545 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:32:02.241030 sshd[5439]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:02.248833 systemd-logind[1486]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:32:02.250483 systemd[1]: sshd@8-157.180.31.220:22-147.75.109.163:46582.service: Deactivated successfully. Nov 8 00:32:02.252782 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:32:02.256652 systemd-logind[1486]: Removed session 8. Nov 8 00:32:02.642699 kubelet[2545]: E1108 00:32:02.642580 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:32:04.642874 kubelet[2545]: E1108 00:32:04.642802 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:32:07.454501 systemd[1]: Started sshd@9-157.180.31.220:22-147.75.109.163:54120.service - OpenSSH per-connection server daemon (147.75.109.163:54120). 
Nov 8 00:32:07.643160 kubelet[2545]: E1108 00:32:07.643116 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:32:07.644732 kubelet[2545]: E1108 00:32:07.644619 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:32:08.592169 sshd[5478]: Accepted publickey for core from 147.75.109.163 port 54120 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:08.593437 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:08.598233 systemd-logind[1486]: New session 9 of user core. Nov 8 00:32:08.605447 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:32:09.513394 sshd[5478]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:09.518694 systemd-logind[1486]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:32:09.520223 systemd[1]: sshd@9-157.180.31.220:22-147.75.109.163:54120.service: Deactivated successfully. Nov 8 00:32:09.523088 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:32:09.525961 systemd-logind[1486]: Removed session 9. Nov 8 00:32:09.643259 kubelet[2545]: E1108 00:32:09.643218 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:32:09.676470 systemd[1]: Started sshd@10-157.180.31.220:22-147.75.109.163:54136.service - OpenSSH per-connection server daemon (147.75.109.163:54136). 
Nov 8 00:32:10.671554 sshd[5492]: Accepted publickey for core from 147.75.109.163 port 54136 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:10.675476 sshd[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:10.689107 systemd-logind[1486]: New session 10 of user core. Nov 8 00:32:10.698390 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:32:11.511118 sshd[5492]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:11.515491 systemd-logind[1486]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:32:11.517147 systemd[1]: sshd@10-157.180.31.220:22-147.75.109.163:54136.service: Deactivated successfully. Nov 8 00:32:11.518912 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:32:11.520612 systemd-logind[1486]: Removed session 10. Nov 8 00:32:11.643590 kubelet[2545]: E1108 00:32:11.643545 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:32:11.683416 systemd[1]: Started sshd@11-157.180.31.220:22-147.75.109.163:52614.service - OpenSSH per-connection server daemon (147.75.109.163:52614). Nov 8 00:32:12.706128 sshd[5504]: Accepted publickey for core from 147.75.109.163 port 52614 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:12.708350 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:12.715046 systemd-logind[1486]: New session 11 of user core. Nov 8 00:32:12.719539 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:32:13.525823 sshd[5504]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:13.529302 systemd[1]: sshd@11-157.180.31.220:22-147.75.109.163:52614.service: Deactivated successfully. Nov 8 00:32:13.533000 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:32:13.534368 systemd-logind[1486]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:32:13.535207 systemd-logind[1486]: Removed session 11. 
Nov 8 00:32:14.651377 kubelet[2545]: E1108 00:32:14.651318 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:32:16.641449 kubelet[2545]: E1108 00:32:16.641348 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:32:18.642551 kubelet[2545]: E1108 00:32:18.642329 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:32:18.692512 systemd[1]: Started sshd@12-157.180.31.220:22-147.75.109.163:52624.service - OpenSSH per-connection server daemon (147.75.109.163:52624). 
Nov 8 00:32:19.643003 kubelet[2545]: E1108 00:32:19.642939 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:32:19.691981 sshd[5523]: Accepted publickey for core from 147.75.109.163 port 52624 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:19.692588 sshd[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:19.697086 systemd-logind[1486]: New session 12 of user core. Nov 8 00:32:19.701541 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:32:20.461525 sshd[5523]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:20.466006 systemd[1]: sshd@12-157.180.31.220:22-147.75.109.163:52624.service: Deactivated successfully. Nov 8 00:32:20.466422 systemd-logind[1486]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:32:20.470738 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:32:20.475997 systemd-logind[1486]: Removed session 12. Nov 8 00:32:23.641380 kubelet[2545]: E1108 00:32:23.640812 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:32:24.641168 kubelet[2545]: E1108 00:32:24.641123 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:32:25.636803 systemd[1]: Started sshd@13-157.180.31.220:22-147.75.109.163:35306.service - OpenSSH per-connection server daemon (147.75.109.163:35306). 
Nov 8 00:32:25.654955 kubelet[2545]: E1108 00:32:25.654914 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:32:26.632434 sshd[5536]: Accepted publickey for core from 147.75.109.163 port 35306 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:26.633680 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:26.638585 systemd-logind[1486]: New session 13 of user core. Nov 8 00:32:26.641456 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:32:27.389833 sshd[5536]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:27.393341 systemd[1]: sshd@13-157.180.31.220:22-147.75.109.163:35306.service: Deactivated successfully. Nov 8 00:32:27.394889 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:32:27.397292 systemd-logind[1486]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:32:27.398845 systemd-logind[1486]: Removed session 13. 
Nov 8 00:32:27.644161 kubelet[2545]: E1108 00:32:27.643536 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:32:31.642155 kubelet[2545]: E1108 00:32:31.641874 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:32:32.562893 systemd[1]: Started sshd@14-157.180.31.220:22-147.75.109.163:55328.service - OpenSSH per-connection server daemon (147.75.109.163:55328). Nov 8 00:32:33.581023 sshd[5551]: Accepted publickey for core from 147.75.109.163 port 55328 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:33.581585 sshd[5551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:33.587486 systemd-logind[1486]: New session 14 of user core. Nov 8 00:32:33.591359 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:32:34.412926 sshd[5551]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:34.417364 systemd[1]: sshd@14-157.180.31.220:22-147.75.109.163:55328.service: Deactivated successfully. Nov 8 00:32:34.419320 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:32:34.420648 systemd-logind[1486]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:32:34.422244 systemd-logind[1486]: Removed session 14. Nov 8 00:32:34.626645 systemd[1]: Started sshd@15-157.180.31.220:22-147.75.109.163:55342.service - OpenSSH per-connection server daemon (147.75.109.163:55342). 
Nov 8 00:32:34.650530 containerd[1500]: time="2025-11-08T00:32:34.650497352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:35.090330 containerd[1500]: time="2025-11-08T00:32:35.090287560Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:35.091659 containerd[1500]: time="2025-11-08T00:32:35.091625835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:35.091754 containerd[1500]: time="2025-11-08T00:32:35.091722976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:35.091949 kubelet[2545]: E1108 00:32:35.091913 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:35.092197 kubelet[2545]: E1108 00:32:35.091958 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:35.092197 kubelet[2545]: E1108 00:32:35.092043 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69548547f7-r5lxh_calico-apiserver(353c4d02-7f56-4df1-98e1-7b89eab13038): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:35.092197 kubelet[2545]: E1108 00:32:35.092071 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:32:35.755214 sshd[5588]: Accepted publickey for core from 147.75.109.163 port 55342 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:35.756698 sshd[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:35.763324 systemd-logind[1486]: New session 15 of user core. Nov 8 00:32:35.766886 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 8 00:32:36.641934 containerd[1500]: time="2025-11-08T00:32:36.641832928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:32:36.917877 sshd[5588]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:36.925479 systemd[1]: sshd@15-157.180.31.220:22-147.75.109.163:55342.service: Deactivated successfully. Nov 8 00:32:36.929184 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:32:36.931589 systemd-logind[1486]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:32:36.933928 systemd-logind[1486]: Removed session 15. Nov 8 00:32:37.105566 systemd[1]: Started sshd@16-157.180.31.220:22-147.75.109.163:55356.service - OpenSSH per-connection server daemon (147.75.109.163:55356). Nov 8 00:32:37.128844 containerd[1500]: time="2025-11-08T00:32:37.128759766Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:37.130446 containerd[1500]: time="2025-11-08T00:32:37.130357046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:32:37.130512 containerd[1500]: time="2025-11-08T00:32:37.130439681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:37.130692 kubelet[2545]: E1108 00:32:37.130654 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:37.130932 kubelet[2545]: E1108 00:32:37.130701 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:32:37.130932 kubelet[2545]: E1108 00:32:37.130802 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-b7584974-6v6qw_calico-system(23fa7156-ab47-44e8-be85-07831bed27aa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:37.134389 kubelet[2545]: E1108 00:32:37.130833 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" 
Nov 8 00:32:37.642595 containerd[1500]: time="2025-11-08T00:32:37.642538834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:32:38.079043 containerd[1500]: time="2025-11-08T00:32:38.078846372Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:38.082282 containerd[1500]: time="2025-11-08T00:32:38.080747548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:32:38.082282 containerd[1500]: time="2025-11-08T00:32:38.080837025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:32:38.082498 kubelet[2545]: E1108 00:32:38.081038 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:38.082498 kubelet[2545]: E1108 00:32:38.081082 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:32:38.082498 kubelet[2545]: E1108 00:32:38.081151 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-6fcd4bdfbb-9mv84_calico-system(13800756-7bce-44ba-ac46-4639ec34a694): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:38.083432 containerd[1500]: time="2025-11-08T00:32:38.083399175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:32:38.236823 sshd[5609]: Accepted publickey for core from 147.75.109.163 port 55356 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:38.239304 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:38.246342 systemd-logind[1486]: New session 16 of user core. Nov 8 00:32:38.251534 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 8 00:32:38.511627 containerd[1500]: time="2025-11-08T00:32:38.511517261Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:38.514264 containerd[1500]: time="2025-11-08T00:32:38.512713743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:32:38.514264 containerd[1500]: time="2025-11-08T00:32:38.512830211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:32:38.514375 kubelet[2545]: E1108 00:32:38.513059 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:38.514375 kubelet[2545]: E1108 00:32:38.513107 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:32:38.514375 kubelet[2545]: E1108 00:32:38.513173 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-6fcd4bdfbb-9mv84_calico-system(13800756-7bce-44ba-ac46-4639ec34a694): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:38.515199 kubelet[2545]: E1108 00:32:38.513209 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:32:39.641163 containerd[1500]: time="2025-11-08T00:32:39.641111375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:32:39.744339 sshd[5609]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:39.748752 systemd[1]: sshd@16-157.180.31.220:22-147.75.109.163:55356.service: Deactivated successfully. Nov 8 00:32:39.756235 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:32:39.759767 systemd-logind[1486]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:32:39.762028 systemd-logind[1486]: Removed session 16. Nov 8 00:32:39.898657 systemd[1]: Started sshd@17-157.180.31.220:22-147.75.109.163:55360.service - OpenSSH per-connection server daemon (147.75.109.163:55360). Nov 8 00:32:40.079476 containerd[1500]: time="2025-11-08T00:32:40.079419887Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:40.082452 containerd[1500]: time="2025-11-08T00:32:40.082316522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:32:40.082452 containerd[1500]: time="2025-11-08T00:32:40.082406109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:40.082694 kubelet[2545]: E1108 00:32:40.082637 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:40.083073 kubelet[2545]: E1108 00:32:40.082701 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:32:40.083073 kubelet[2545]: E1108 00:32:40.082791 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-69548547f7-zwltf_calico-apiserver(e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:40.083073 kubelet[2545]: E1108 00:32:40.082838 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:32:40.645311 containerd[1500]: time="2025-11-08T00:32:40.645225178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:32:40.932143 sshd[5626]: Accepted publickey for core from 147.75.109.163 port 55360 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:40.933718 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:40.942468 systemd-logind[1486]: New session 17 of user core. Nov 8 00:32:40.947447 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 8 00:32:41.086359 containerd[1500]: time="2025-11-08T00:32:41.086305165Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:41.087709 containerd[1500]: time="2025-11-08T00:32:41.087512648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:32:41.087709 containerd[1500]: time="2025-11-08T00:32:41.087607826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:32:41.088133 kubelet[2545]: E1108 00:32:41.087906 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:41.088897 kubelet[2545]: E1108 00:32:41.088499 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:32:41.088897 kubelet[2545]: E1108 00:32:41.088600 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-xpcnb_calico-system(fa6c771d-e186-4cd9-a6e0-552ae2873655): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:41.088897 kubelet[2545]: E1108 00:32:41.088631 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:32:41.921075 sshd[5626]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:41.924531 systemd[1]: sshd@17-157.180.31.220:22-147.75.109.163:55360.service: Deactivated successfully. Nov 8 00:32:41.926738 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:32:41.927871 systemd-logind[1486]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:32:41.930003 systemd-logind[1486]: Removed session 17. Nov 8 00:32:42.092311 systemd[1]: Started sshd@18-157.180.31.220:22-147.75.109.163:58592.service - OpenSSH per-connection server daemon (147.75.109.163:58592). Nov 8 00:32:43.094723 sshd[5640]: Accepted publickey for core from 147.75.109.163 port 58592 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:43.097738 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:43.103119 systemd-logind[1486]: New session 18 of user core. 
Nov 8 00:32:43.106365 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:32:43.961721 sshd[5640]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:43.965775 systemd[1]: sshd@18-157.180.31.220:22-147.75.109.163:58592.service: Deactivated successfully. Nov 8 00:32:43.967533 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:32:43.968650 systemd-logind[1486]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:32:43.969944 systemd-logind[1486]: Removed session 18. Nov 8 00:32:44.641230 containerd[1500]: time="2025-11-08T00:32:44.641193707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:32:45.099231 containerd[1500]: time="2025-11-08T00:32:45.099108170Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:45.100732 containerd[1500]: time="2025-11-08T00:32:45.100550824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:32:45.100732 containerd[1500]: time="2025-11-08T00:32:45.100617259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:32:45.100826 kubelet[2545]: E1108 00:32:45.100742 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:45.100826 kubelet[2545]: E1108 00:32:45.100780 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:32:45.101334 kubelet[2545]: E1108 00:32:45.100854 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:45.102711 containerd[1500]: time="2025-11-08T00:32:45.102331860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:32:45.700352 containerd[1500]: time="2025-11-08T00:32:45.700304391Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:32:45.703283 containerd[1500]: time="2025-11-08T00:32:45.702974878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:32:45.703283 containerd[1500]: time="2025-11-08T00:32:45.703024551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:32:45.703686 kubelet[2545]: E1108 00:32:45.703198 2545 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:45.703686 kubelet[2545]: E1108 00:32:45.703361 2545 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:32:45.703686 kubelet[2545]: E1108 00:32:45.703607 2545 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-4mxdr_calico-system(f0c2bf49-2c83-4e41-9990-a77826efb954): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:32:45.704181 kubelet[2545]: E1108 00:32:45.703954 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:32:46.641005 kubelet[2545]: E1108 00:32:46.640946 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:32:49.134386 systemd[1]: Started sshd@19-157.180.31.220:22-147.75.109.163:58606.service - OpenSSH per-connection server daemon (147.75.109.163:58606).
Nov 8 00:32:49.645892 kubelet[2545]: E1108 00:32:49.645834 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:32:49.657409 kubelet[2545]: E1108 00:32:49.657318 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:32:50.184286 sshd[5655]: Accepted publickey for core from 147.75.109.163 port 58606 ssh2: RSA SHA256:OlzoI32JgcpjQ1LH303EkMKY9qIGtPKb42SZOMj04EQ Nov 8 00:32:50.185914 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:32:50.192527 systemd-logind[1486]: New session 19 of user core. Nov 8 00:32:50.197500 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:32:50.995592 sshd[5655]: pam_unix(sshd:session): session closed for user core Nov 8 00:32:50.998882 systemd[1]: sshd@19-157.180.31.220:22-147.75.109.163:58606.service: Deactivated successfully. Nov 8 00:32:51.001390 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:32:51.005110 systemd-logind[1486]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:32:51.006310 systemd-logind[1486]: Removed session 19. 
Nov 8 00:32:51.643306 kubelet[2545]: E1108 00:32:51.641908 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:32:53.642716 kubelet[2545]: E1108 00:32:53.641897 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:32:58.641420 kubelet[2545]: E1108 00:32:58.641317 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:33:01.642083 kubelet[2545]: E1108 00:33:01.641997 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-r5lxh" podUID="353c4d02-7f56-4df1-98e1-7b89eab13038" Nov 8 00:33:01.642878 kubelet[2545]: E1108 00:33:01.642175 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b7584974-6v6qw" podUID="23fa7156-ab47-44e8-be85-07831bed27aa" Nov 8 00:33:01.643539 kubelet[2545]: E1108 00:33:01.643377 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6fcd4bdfbb-9mv84" podUID="13800756-7bce-44ba-ac46-4639ec34a694" Nov 8 00:33:03.643513 kubelet[2545]: E1108 00:33:03.643455 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-69548547f7-zwltf" podUID="e0024d9c-a1f5-4e59-abcc-d8ad3577f9a2" Nov 8 00:33:06.386799 kubelet[2545]: E1108 00:33:06.386732 2545 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52674->10.0.0.2:2379: read: connection timed out" Nov 8 00:33:06.607588 systemd[1]: cri-containerd-1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0.scope: Deactivated successfully. Nov 8 00:33:06.608732 systemd[1]: cri-containerd-1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0.scope: Consumed 3.852s CPU time, 20.1M memory peak, 0B memory swap peak. Nov 8 00:33:06.641567 kubelet[2545]: E1108 00:33:06.641403 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-xpcnb" podUID="fa6c771d-e186-4cd9-a6e0-552ae2873655" Nov 8 00:33:06.695897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0-rootfs.mount: Deactivated successfully.
Nov 8 00:33:06.717098 containerd[1500]: time="2025-11-08T00:33:06.702590688Z" level=info msg="shim disconnected" id=1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0 namespace=k8s.io Nov 8 00:33:06.724016 containerd[1500]: time="2025-11-08T00:33:06.723981841Z" level=warning msg="cleaning up after shim disconnected" id=1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0 namespace=k8s.io Nov 8 00:33:06.724016 containerd[1500]: time="2025-11-08T00:33:06.724004984Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:33:07.207409 systemd[1]: cri-containerd-cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6.scope: Deactivated successfully. Nov 8 00:33:07.208898 systemd[1]: cri-containerd-cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6.scope: Consumed 17.463s CPU time. Nov 8 00:33:07.227852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6-rootfs.mount: Deactivated successfully. Nov 8 00:33:07.233638 containerd[1500]: time="2025-11-08T00:33:07.233457777Z" level=info msg="shim disconnected" id=cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6 namespace=k8s.io Nov 8 00:33:07.233638 containerd[1500]: time="2025-11-08T00:33:07.233519803Z" level=warning msg="cleaning up after shim disconnected" id=cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6 namespace=k8s.io Nov 8 00:33:07.233638 containerd[1500]: time="2025-11-08T00:33:07.233528608Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:33:07.363607 kubelet[2545]: I1108 00:33:07.363567 2545 scope.go:117] "RemoveContainer" containerID="cb2b83b19f18b2c0c153febf07074bfa296332b874ecdb7fbf5404cfd0b6f5c6" Nov 8 00:33:07.363884 kubelet[2545]: I1108 00:33:07.363856 2545 scope.go:117] "RemoveContainer" containerID="1b820a4e0ae8e5e92562364be449cedf396164c76f0c426d88af66db0cb21ff0" Nov 8 00:33:07.404435 containerd[1500]: time="2025-11-08T00:33:07.404292394Z" level=info msg="CreateContainer within sandbox \"d511180c6ee8ee4150691b376d40849d06d170544f8195eec52be889b3134a7c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 8 00:33:07.411309 containerd[1500]: time="2025-11-08T00:33:07.411273142Z" level=info msg="CreateContainer within sandbox \"03859b7c62830457ea00ad198af6bad8a149e9b98b7a0e68957cf2c70dabc6ce\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 8 00:33:07.477825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount159881727.mount: Deactivated successfully. 
Nov 8 00:33:07.487910 containerd[1500]: time="2025-11-08T00:33:07.487881201Z" level=info msg="CreateContainer within sandbox \"d511180c6ee8ee4150691b376d40849d06d170544f8195eec52be889b3134a7c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0d69836c6472d9d45a950f6aefe63a7fbe801961a8a1be484c50fe0ae8fe38fe\"" Nov 8 00:33:07.489805 containerd[1500]: time="2025-11-08T00:33:07.489745989Z" level=info msg="CreateContainer within sandbox \"03859b7c62830457ea00ad198af6bad8a149e9b98b7a0e68957cf2c70dabc6ce\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"072ba9cd3bc45e3e06cc6705bb1850755b17b95e239f53d5ca78434a8151c007\"" Nov 8 00:33:07.491089 containerd[1500]: time="2025-11-08T00:33:07.491048656Z" level=info msg="StartContainer for \"0d69836c6472d9d45a950f6aefe63a7fbe801961a8a1be484c50fe0ae8fe38fe\"" Nov 8 00:33:07.491900 containerd[1500]: time="2025-11-08T00:33:07.491048517Z" level=info msg="StartContainer for \"072ba9cd3bc45e3e06cc6705bb1850755b17b95e239f53d5ca78434a8151c007\"" Nov 8 00:33:07.522659 systemd[1]: Started cri-containerd-072ba9cd3bc45e3e06cc6705bb1850755b17b95e239f53d5ca78434a8151c007.scope - libcontainer container 072ba9cd3bc45e3e06cc6705bb1850755b17b95e239f53d5ca78434a8151c007. Nov 8 00:33:07.530504 systemd[1]: Started cri-containerd-0d69836c6472d9d45a950f6aefe63a7fbe801961a8a1be484c50fe0ae8fe38fe.scope - libcontainer container 0d69836c6472d9d45a950f6aefe63a7fbe801961a8a1be484c50fe0ae8fe38fe. Nov 8 00:33:07.555797 containerd[1500]: time="2025-11-08T00:33:07.555709637Z" level=info msg="StartContainer for \"072ba9cd3bc45e3e06cc6705bb1850755b17b95e239f53d5ca78434a8151c007\" returns successfully" Nov 8 00:33:07.581664 containerd[1500]: time="2025-11-08T00:33:07.581550466Z" level=info msg="StartContainer for \"0d69836c6472d9d45a950f6aefe63a7fbe801961a8a1be484c50fe0ae8fe38fe\" returns successfully" Nov 8 00:33:07.992485 kubelet[2545]: E1108 00:33:07.987519 2545 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52330->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-6ee8ddef06.1875e0c7b4ee93d3 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-6ee8ddef06,UID:df19481ededd2bc80170a80d96b1ee36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-6ee8ddef06,},FirstTimestamp:2025-11-08 00:32:57.524065235 +0000 UTC m=+152.018567163,LastTimestamp:2025-11-08 00:32:57.524065235 +0000 UTC m=+152.018567163,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-6ee8ddef06,}" Nov 8 00:33:10.641203 kubelet[2545]: E1108 00:33:10.641144 2545 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mxdr" podUID="f0c2bf49-2c83-4e41-9990-a77826efb954" Nov 8 00:33:10.896558 systemd[1]: cri-containerd-92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e.scope: Deactivated successfully. Nov 8 00:33:10.897771 systemd[1]: cri-containerd-92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e.scope: Consumed 2.506s CPU time, 19.0M memory peak, 0B memory swap peak. Nov 8 00:33:10.921996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e-rootfs.mount: Deactivated successfully. Nov 8 00:33:10.934526 containerd[1500]: time="2025-11-08T00:33:10.934463712Z" level=info msg="shim disconnected" id=92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e namespace=k8s.io Nov 8 00:33:10.934526 containerd[1500]: time="2025-11-08T00:33:10.934514707Z" level=warning msg="cleaning up after shim disconnected" id=92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e namespace=k8s.io Nov 8 00:33:10.934526 containerd[1500]: time="2025-11-08T00:33:10.934523484Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:33:11.373202 kubelet[2545]: I1108 00:33:11.373163 2545 scope.go:117] "RemoveContainer" containerID="92d42b9e069215bd4aed3cd4638c5d9c441d8b758ad7f82d717b2943b714e60e" Nov 8 00:33:11.375123 containerd[1500]: time="2025-11-08T00:33:11.375094209Z" level=info msg="CreateContainer within sandbox \"3bc0962c00cd6e74b1a73f60a4237d17bb9eb9871e309e4ff16a38f650a87ba6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 8 00:33:11.402971 containerd[1500]: time="2025-11-08T00:33:11.402845875Z" level=info msg="CreateContainer within sandbox \"3bc0962c00cd6e74b1a73f60a4237d17bb9eb9871e309e4ff16a38f650a87ba6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d601457a04ff7010f1e5f51bfe580ad5594a23ca09d8e124acaab579a659b6f0\"" Nov 8 00:33:11.404889 containerd[1500]: time="2025-11-08T00:33:11.404822273Z" level=info msg="StartContainer for \"d601457a04ff7010f1e5f51bfe580ad5594a23ca09d8e124acaab579a659b6f0\"" Nov 8 00:33:11.454598 systemd[1]: Started cri-containerd-d601457a04ff7010f1e5f51bfe580ad5594a23ca09d8e124acaab579a659b6f0.scope - libcontainer container d601457a04ff7010f1e5f51bfe580ad5594a23ca09d8e124acaab579a659b6f0. Nov 8 00:33:11.519181 containerd[1500]: time="2025-11-08T00:33:11.519150048Z" level=info msg="StartContainer for \"d601457a04ff7010f1e5f51bfe580ad5594a23ca09d8e124acaab579a659b6f0\" returns successfully"