Nov 1 00:21:42.869651 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:21:42.869674 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:21:42.869682 kernel: BIOS-provided physical RAM map:
Nov 1 00:21:42.869687 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 00:21:42.869692 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 00:21:42.869698 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 00:21:42.869704 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Nov 1 00:21:42.869709 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Nov 1 00:21:42.869716 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 00:21:42.869721 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 00:21:42.869727 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:21:42.869732 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 00:21:42.869737 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:21:42.869743 kernel: NX (Execute Disable) protection: active
Nov 1 00:21:42.869751 kernel: APIC: Static calls initialized
Nov 1 00:21:42.869757 kernel: SMBIOS 3.0.0 present.
Nov 1 00:21:42.869763 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Nov 1 00:21:42.869769 kernel: Hypervisor detected: KVM
Nov 1 00:21:42.869775 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:21:42.869781 kernel: kvm-clock: using sched offset of 3096177124 cycles
Nov 1 00:21:42.869787 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:21:42.869794 kernel: tsc: Detected 2445.404 MHz processor
Nov 1 00:21:42.869800 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:21:42.869808 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:21:42.869814 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Nov 1 00:21:42.869820 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 00:21:42.869826 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:21:42.869831 kernel: Using GB pages for direct mapping
Nov 1 00:21:42.869837 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:21:42.869843 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Nov 1 00:21:42.869849 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:42.869855 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:42.869863 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:42.869869 kernel: ACPI: FACS 0x000000007CFE0000 000040
Nov 1 00:21:42.869875 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:42.869904 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:42.869913 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:42.869919 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:42.869927 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Nov 1 00:21:42.869938 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Nov 1 00:21:42.869955 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Nov 1 00:21:42.869962 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Nov 1 00:21:42.869968 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Nov 1 00:21:42.869977 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Nov 1 00:21:42.869987 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Nov 1 00:21:42.869997 kernel: No NUMA configuration found
Nov 1 00:21:42.870008 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Nov 1 00:21:42.870014 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Nov 1 00:21:42.870021 kernel: Zone ranges:
Nov 1 00:21:42.870027 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:21:42.870033 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Nov 1 00:21:42.870039 kernel: Normal empty
Nov 1 00:21:42.870044 kernel: Movable zone start for each node
Nov 1 00:21:42.870049 kernel: Early memory node ranges
Nov 1 00:21:42.870054 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 00:21:42.870060 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Nov 1 00:21:42.870066 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Nov 1 00:21:42.870076 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:21:42.870083 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 00:21:42.870089 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 1 00:21:42.870094 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:21:42.870099 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:21:42.870105 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:21:42.870110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:21:42.870115 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:21:42.870122 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:21:42.870711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:21:42.870723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:21:42.873146 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:21:42.873161 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:21:42.873172 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:21:42.873178 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:21:42.873183 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 00:21:42.873189 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:21:42.873198 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:21:42.873203 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:21:42.873209 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:21:42.873214 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:21:42.873220 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:21:42.873225 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 1 00:21:42.873232 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:21:42.873241 kernel: random: crng init done
Nov 1 00:21:42.873251 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:21:42.873257 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:21:42.873263 kernel: Fallback order for Node 0: 0
Nov 1 00:21:42.873268 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Nov 1 00:21:42.873273 kernel: Policy zone: DMA32
Nov 1 00:21:42.873279 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:21:42.873285 kernel: Memory: 1922052K/2047464K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 125152K reserved, 0K cma-reserved)
Nov 1 00:21:42.873290 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:21:42.873296 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:21:42.873302 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:21:42.873308 kernel: Dynamic Preempt: voluntary
Nov 1 00:21:42.873313 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:21:42.873326 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:21:42.873334 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:21:42.873339 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:21:42.873345 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:21:42.873350 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:21:42.873356 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:21:42.873361 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:21:42.873369 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:21:42.873374 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:21:42.873379 kernel: Console: colour VGA+ 80x25
Nov 1 00:21:42.873385 kernel: printk: console [tty0] enabled
Nov 1 00:21:42.873390 kernel: printk: console [ttyS0] enabled
Nov 1 00:21:42.873399 kernel: ACPI: Core revision 20230628
Nov 1 00:21:42.873409 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:21:42.873414 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:21:42.873420 kernel: x2apic enabled
Nov 1 00:21:42.873427 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:21:42.873432 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:21:42.873438 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 1 00:21:42.873444 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Nov 1 00:21:42.873449 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:21:42.873455 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:21:42.873460 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:21:42.873466 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:21:42.873484 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:21:42.873490 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:21:42.873496 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 00:21:42.873503 kernel: active return thunk: retbleed_return_thunk
Nov 1 00:21:42.873509 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 00:21:42.873515 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:21:42.873521 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:21:42.873527 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:21:42.873534 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:21:42.873540 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:21:42.873546 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:21:42.873555 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 1 00:21:42.873565 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:21:42.873572 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:21:42.873577 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:21:42.873583 kernel: landlock: Up and running.
Nov 1 00:21:42.873589 kernel: SELinux: Initializing.
Nov 1 00:21:42.873596 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:21:42.873602 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:21:42.873607 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 00:21:42.873613 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:21:42.873619 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:21:42.873625 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:21:42.873633 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:21:42.873643 kernel: ... version: 0
Nov 1 00:21:42.873649 kernel: ... bit width: 48
Nov 1 00:21:42.873656 kernel: ... generic registers: 6
Nov 1 00:21:42.873662 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:21:42.873668 kernel: ... max period: 00007fffffffffff
Nov 1 00:21:42.873673 kernel: ... fixed-purpose events: 0
Nov 1 00:21:42.873679 kernel: ... event mask: 000000000000003f
Nov 1 00:21:42.873684 kernel: signal: max sigframe size: 1776
Nov 1 00:21:42.873690 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:21:42.873696 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:21:42.873701 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:21:42.873712 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:21:42.873722 kernel: .... node #0, CPUs: #1
Nov 1 00:21:42.873729 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:21:42.873735 kernel: smpboot: Max logical packages: 1
Nov 1 00:21:42.873741 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Nov 1 00:21:42.873747 kernel: devtmpfs: initialized
Nov 1 00:21:42.873752 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:21:42.873758 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:21:42.873764 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:21:42.873771 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:21:42.873777 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:21:42.873782 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:21:42.873788 kernel: audit: type=2000 audit(1761956501.530:1): state=initialized audit_enabled=0 res=1
Nov 1 00:21:42.873794 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:21:42.873799 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:21:42.873806 kernel: cpuidle: using governor menu
Nov 1 00:21:42.873811 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:21:42.873817 kernel: dca service started, version 1.12.1
Nov 1 00:21:42.873823 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 00:21:42.873830 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:21:42.873835 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:21:42.873841 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:21:42.873847 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:21:42.873852 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:21:42.873858 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:21:42.873864 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:21:42.873869 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:21:42.873876 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:21:42.873897 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:21:42.873903 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:21:42.873909 kernel: ACPI: Interpreter enabled
Nov 1 00:21:42.873915 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:21:42.873920 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:21:42.873926 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:21:42.873932 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:21:42.873937 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:21:42.873945 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:21:42.874073 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:21:42.875204 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:21:42.875282 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:21:42.875291 kernel: PCI host bridge to bus 0000:00
Nov 1 00:21:42.875376 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:21:42.875437 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:21:42.875498 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:21:42.875552 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Nov 1 00:21:42.875605 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 00:21:42.875658 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 1 00:21:42.875712 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:21:42.875827 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 00:21:42.875923 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Nov 1 00:21:42.875995 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Nov 1 00:21:42.876058 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Nov 1 00:21:42.876120 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Nov 1 00:21:42.876304 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Nov 1 00:21:42.876370 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:21:42.876444 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 1 00:21:42.876514 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Nov 1 00:21:42.876586 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 1 00:21:42.876647 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Nov 1 00:21:42.876714 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 1 00:21:42.876795 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Nov 1 00:21:42.876875 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 1 00:21:42.876980 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Nov 1 00:21:42.877073 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 1 00:21:42.878962 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Nov 1 00:21:42.879050 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 1 00:21:42.879118 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Nov 1 00:21:42.879222 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 1 00:21:42.879295 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Nov 1 00:21:42.879364 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 1 00:21:42.879425 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Nov 1 00:21:42.879495 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Nov 1 00:21:42.879557 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Nov 1 00:21:42.879624 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 00:21:42.879690 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:21:42.879759 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 00:21:42.879822 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Nov 1 00:21:42.879920 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Nov 1 00:21:42.879995 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 00:21:42.880059 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 00:21:42.880148 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Nov 1 00:21:42.880222 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Nov 1 00:21:42.880286 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 1 00:21:42.880348 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Nov 1 00:21:42.880410 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 1 00:21:42.880471 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 00:21:42.880531 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 1 00:21:42.880600 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 1 00:21:42.880689 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Nov 1 00:21:42.880770 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 1 00:21:42.880841 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 00:21:42.880928 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 00:21:42.881003 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Nov 1 00:21:42.882544 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Nov 1 00:21:42.882626 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Nov 1 00:21:42.882692 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 1 00:21:42.882776 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 00:21:42.882843 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 00:21:42.882959 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Nov 1 00:21:42.883035 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 1 00:21:42.883107 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 1 00:21:42.883197 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 00:21:42.883272 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 00:21:42.883373 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 1 00:21:42.883442 kernel: pci 0000:05:00.0: reg 0x14: [mem 0xfe000000-0xfe000fff]
Nov 1 00:21:42.883506 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Nov 1 00:21:42.883569 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 1 00:21:42.883630 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 00:21:42.883690 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 00:21:42.883770 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Nov 1 00:21:42.883836 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Nov 1 00:21:42.883919 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Nov 1 00:21:42.883983 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 1 00:21:42.884044 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 00:21:42.884103 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 00:21:42.884111 kernel: acpiphp: Slot [0] registered
Nov 1 00:21:42.886192 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Nov 1 00:21:42.886328 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Nov 1 00:21:42.886443 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Nov 1 00:21:42.886566 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Nov 1 00:21:42.886687 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 1 00:21:42.886806 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 00:21:42.886947 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 00:21:42.886964 kernel: acpiphp: Slot [0-2] registered
Nov 1 00:21:42.887087 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 1 00:21:42.888256 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 1 00:21:42.888333 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 00:21:42.888350 kernel: acpiphp: Slot [0-3] registered
Nov 1 00:21:42.888420 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 1 00:21:42.888482 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 00:21:42.888543 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 00:21:42.888552 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:21:42.888558 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:21:42.888568 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:21:42.888574 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:21:42.888580 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:21:42.888586 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:21:42.888592 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:21:42.888597 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:21:42.888603 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:21:42.888608 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:21:42.888614 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:21:42.888622 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:21:42.888628 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:21:42.888634 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:21:42.888639 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:21:42.888645 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:21:42.888651 kernel: iommu: Default domain type: Translated
Nov 1 00:21:42.888657 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:21:42.888663 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:21:42.888668 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:21:42.888676 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 00:21:42.888682 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Nov 1 00:21:42.888746 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:21:42.888834 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:21:42.888915 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:21:42.888925 kernel: vgaarb: loaded
Nov 1 00:21:42.888932 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:21:42.888938 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:21:42.888947 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:21:42.888953 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:21:42.888962 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:21:42.888974 kernel: pnp: PnP ACPI init
Nov 1 00:21:42.889058 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 00:21:42.889069 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 00:21:42.889075 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:21:42.889081 kernel: NET: Registered PF_INET protocol family
Nov 1 00:21:42.889087 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:21:42.889095 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 00:21:42.889101 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:21:42.889107 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:21:42.889114 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 00:21:42.889119 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 00:21:42.890166 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:21:42.890176 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:21:42.890182 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:21:42.890192 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:21:42.890272 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 1 00:21:42.890338 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 1 00:21:42.890400 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 1 00:21:42.890476 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Nov 1 00:21:42.890561 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Nov 1 00:21:42.890703 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Nov 1 00:21:42.891195 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 1 00:21:42.891336 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 00:21:42.891404 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 1 00:21:42.891469 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 1 00:21:42.891531 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 00:21:42.891591 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 00:21:42.891671 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 1 00:21:42.891738 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 00:21:42.891833 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 00:21:42.891928 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 1 00:21:42.891992 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 00:21:42.892054 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 00:21:42.892117 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 1 00:21:42.894228 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 00:21:42.894326 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 00:21:42.894409 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 1 00:21:42.894522 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 00:21:42.894619 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 00:21:42.894724 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 1 00:21:42.894814 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Nov 1 00:21:42.894905 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 00:21:42.895012 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 00:21:42.895081 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 1 00:21:42.895329 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Nov 1 00:21:42.895459 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 1 00:21:42.895553 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 00:21:42.895678 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 1 00:21:42.895769 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Nov 1 00:21:42.895864 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 00:21:42.896007 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 00:21:42.896097 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:21:42.896197 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:21:42.896309 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:21:42.896376 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Nov 1 00:21:42.896477 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 00:21:42.896537 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 1 00:21:42.896642 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 1 00:21:42.896706 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 1 00:21:42.896792 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 1 00:21:42.896861 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 00:21:42.896948 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 1 00:21:42.897015 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 00:21:42.899176 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 1 00:21:42.899262 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 00:21:42.899364 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 1 00:21:42.899431 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 00:21:42.899496 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 1 00:21:42.899554 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 00:21:42.899652 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Nov 1 00:21:42.899746 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 1 00:21:42.899842 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 00:21:42.899946 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Nov 1 00:21:42.900061 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 1 00:21:42.901182 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 00:21:42.901301 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Nov 1 00:21:42.901397 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 1 00:21:42.901487 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 00:21:42.901502 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 00:21:42.901512 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:21:42.901521 kernel: Initialise system trusted keyrings
Nov 1 00:21:42.901531 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 1 00:21:42.901542 kernel: Key type asymmetric registered
Nov 1 00:21:42.901554 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:21:42.901567 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:21:42.901576 kernel: io scheduler mq-deadline registered
Nov 1 00:21:42.901585 kernel: io scheduler kyber registered
Nov 1 00:21:42.901594 kernel: io scheduler bfq registered
Nov 1 00:21:42.901697 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 1 00:21:42.901794 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 1 00:21:42.901909 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 1 00:21:42.902005 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 1 00:21:42.902102 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 1 00:21:42.903306 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 1 00:21:42.903407 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 1 00:21:42.903495 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 1 00:21:42.903591 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 1 00:21:42.903684 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 1 00:21:42.903773 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 1 00:21:42.903856 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 1 00:21:42.903974 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 1 00:21:42.904109 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 1 00:21:42.904215 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 1 00:21:42.904314 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 1 00:21:42.904329 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 1 00:21:42.904441 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Nov 1 00:21:42.904568 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Nov 1 00:21:42.904587 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:21:42.904599 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Nov 1 00:21:42.904615 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:21:42.904627 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:21:42.904638 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:21:42.904649 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:21:42.904661 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:21:42.904779 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 1 00:21:42.904792 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:21:42.904866 kernel: rtc_cmos 00:03: registered as rtc0
Nov 1 00:21:42.904981 kernel: rtc_cmos 00:03: setting system
clock to 2025-11-01T00:21:42 UTC (1761956502) Nov 1 00:21:42.905078 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 1 00:21:42.905097 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 1 00:21:42.905109 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:21:42.905121 kernel: Segment Routing with IPv6 Nov 1 00:21:42.907165 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:21:42.907177 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:21:42.907188 kernel: Key type dns_resolver registered Nov 1 00:21:42.907200 kernel: IPI shorthand broadcast: enabled Nov 1 00:21:42.907212 kernel: sched_clock: Marking stable (1152124883, 138516738)->(1305084620, -14442999) Nov 1 00:21:42.907218 kernel: registered taskstats version 1 Nov 1 00:21:42.907224 kernel: Loading compiled-in X.509 certificates Nov 1 00:21:42.907231 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:21:42.907237 kernel: Key type .fscrypt registered Nov 1 00:21:42.907243 kernel: Key type fscrypt-provisioning registered Nov 1 00:21:42.907254 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 00:21:42.907266 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:21:42.907273 kernel: ima: No architecture policies found
Nov 1 00:21:42.907281 kernel: clk: Disabling unused clocks
Nov 1 00:21:42.907287 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:21:42.907294 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:21:42.907304 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:21:42.907315 kernel: Run /init as init process
Nov 1 00:21:42.907321 kernel: with arguments:
Nov 1 00:21:42.907328 kernel: /init
Nov 1 00:21:42.907334 kernel: with environment:
Nov 1 00:21:42.907340 kernel: HOME=/
Nov 1 00:21:42.907348 kernel: TERM=linux
Nov 1 00:21:42.907358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:21:42.907366 systemd[1]: Detected virtualization kvm.
Nov 1 00:21:42.907373 systemd[1]: Detected architecture x86-64.
Nov 1 00:21:42.907380 systemd[1]: Running in initrd.
Nov 1 00:21:42.907386 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:21:42.907392 systemd[1]: Hostname set to .
Nov 1 00:21:42.907400 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:21:42.907408 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:21:42.907419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:21:42.907431 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:21:42.907445 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:21:42.907454 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:21:42.907461 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:21:42.907471 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:21:42.907489 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:21:42.907502 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:21:42.907512 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:21:42.907524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:21:42.907536 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:21:42.907548 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:21:42.907560 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:21:42.907572 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:21:42.907588 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:21:42.907601 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:21:42.907613 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:21:42.907626 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:21:42.907638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:21:42.907650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:21:42.907657 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:21:42.907663 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:21:42.907674 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:21:42.907680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:21:42.907687 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:21:42.907693 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:21:42.907699 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:21:42.907706 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:21:42.907736 systemd-journald[187]: Collecting audit messages is disabled.
Nov 1 00:21:42.907758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:21:42.907765 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:21:42.907772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:21:42.907779 systemd-journald[187]: Journal started
Nov 1 00:21:42.907797 systemd-journald[187]: Runtime Journal (/run/log/journal/2a7f518c7da74c72874d68622a9ee067) is 4.8M, max 38.4M, 33.6M free.
Nov 1 00:21:42.891344 systemd-modules-load[188]: Inserted module 'overlay'
Nov 1 00:21:42.941002 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:21:42.941032 kernel: Bridge firewalling registered
Nov 1 00:21:42.941048 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:21:42.913984 systemd-modules-load[188]: Inserted module 'br_netfilter'
Nov 1 00:21:42.941832 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:21:42.942785 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:21:42.943966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:42.952336 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:21:42.955307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:21:42.956623 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:21:42.962328 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:21:42.970377 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:21:42.973248 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:21:42.982625 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:21:42.992342 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:21:42.993187 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:21:42.993908 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:21:43.000791 dracut-cmdline[213]: dracut-dracut-053
Nov 1 00:21:43.004461 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:21:43.005294 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:21:43.008419 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:21:43.034286 systemd-resolved[228]: Positive Trust Anchors:
Nov 1 00:21:43.034987 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:21:43.035719 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:21:43.043241 systemd-resolved[228]: Defaulting to hostname 'linux'.
Nov 1 00:21:43.044108 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:21:43.044835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:21:43.075189 kernel: SCSI subsystem initialized
Nov 1 00:21:43.083173 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:21:43.096185 kernel: iscsi: registered transport (tcp)
Nov 1 00:21:43.114589 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:21:43.114669 kernel: QLogic iSCSI HBA Driver
Nov 1 00:21:43.141824 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:21:43.149271 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:21:43.173332 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:21:43.173408 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:21:43.176155 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:21:43.215174 kernel: raid6: avx2x4 gen() 30215 MB/s
Nov 1 00:21:43.233172 kernel: raid6: avx2x2 gen() 24263 MB/s
Nov 1 00:21:43.250385 kernel: raid6: avx2x1 gen() 23425 MB/s
Nov 1 00:21:43.250439 kernel: raid6: using algorithm avx2x4 gen() 30215 MB/s
Nov 1 00:21:43.270538 kernel: raid6: .... xor() 4114 MB/s, rmw enabled
Nov 1 00:21:43.270597 kernel: raid6: using avx2x2 recovery algorithm
Nov 1 00:21:43.291175 kernel: xor: automatically using best checksumming function avx
Nov 1 00:21:43.409168 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:21:43.417318 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:21:43.425296 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:21:43.436903 systemd-udevd[406]: Using default interface naming scheme 'v255'.
Nov 1 00:21:43.440166 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:21:43.448302 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:21:43.459032 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Nov 1 00:21:43.485590 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:21:43.491261 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:21:43.534383 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:21:43.544319 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:21:43.558102 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:21:43.560719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:21:43.562916 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:21:43.563961 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:21:43.572315 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 1 00:21:43.584961 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:21:43.613143 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:21:43.615166 kernel: scsi host0: Virtio SCSI HBA
Nov 1 00:21:43.615305 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 1 00:21:43.634842 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:21:43.634995 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:21:43.635791 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:21:43.636515 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:21:43.636604 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:43.639187 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:21:43.693406 kernel: ACPI: bus type USB registered
Nov 1 00:21:43.693437 kernel: usbcore: registered new interface driver usbfs
Nov 1 00:21:43.693446 kernel: usbcore: registered new interface driver hub
Nov 1 00:21:43.693453 kernel: usbcore: registered new device driver usb
Nov 1 00:21:43.693461 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 1 00:21:43.693468 kernel: AES CTR mode by8 optimization enabled
Nov 1 00:21:43.693475 kernel: libata version 3.00 loaded.
Nov 1 00:21:43.674840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:21:43.714191 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 1 00:21:43.714368 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Nov 1 00:21:43.714457 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Nov 1 00:21:43.717154 kernel: ahci 0000:00:1f.2: version 3.0
Nov 1 00:21:43.718179 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 1 00:21:43.722153 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 1 00:21:43.722275 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Nov 1 00:21:43.722364 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Nov 1 00:21:43.723516 kernel: hub 1-0:1.0: USB hub found
Nov 1 00:21:43.723636 kernel: hub 1-0:1.0: 4 ports detected
Nov 1 00:21:43.724828 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Nov 1 00:21:43.730478 kernel: hub 2-0:1.0: USB hub found
Nov 1 00:21:43.730582 kernel: hub 2-0:1.0: 4 ports detected
Nov 1 00:21:43.730666 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 1 00:21:43.730750 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 1 00:21:43.732500 kernel: scsi host1: ahci
Nov 1 00:21:43.737218 kernel: scsi host2: ahci
Nov 1 00:21:43.739174 kernel: scsi host3: ahci
Nov 1 00:21:43.739321 kernel: scsi host4: ahci
Nov 1 00:21:43.739439 kernel: scsi host5: ahci
Nov 1 00:21:43.740147 kernel: scsi host6: ahci
Nov 1 00:21:43.740261 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 51
Nov 1 00:21:43.740271 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 51
Nov 1 00:21:43.740278 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 51
Nov 1 00:21:43.740285 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 51
Nov 1 00:21:43.740293 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 51
Nov 1 00:21:43.740300 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 51
Nov 1 00:21:43.785706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:43.792259 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:21:43.804255 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:21:43.962167 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Nov 1 00:21:44.046163 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 1 00:21:44.046246 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 1 00:21:44.055244 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 1 00:21:44.055285 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 1 00:21:44.056147 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 1 00:21:44.059159 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 1 00:21:44.059188 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 1 00:21:44.061546 kernel: ata1.00: applying bridge limits
Nov 1 00:21:44.062784 kernel: ata1.00: configured for UDMA/100
Nov 1 00:21:44.063501 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 1 00:21:44.097285 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 1 00:21:44.099321 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Nov 1 00:21:44.100827 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 1 00:21:44.101000 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 1 00:21:44.102143 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 1 00:21:44.102303 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 1 00:21:44.105283 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 1 00:21:44.108344 kernel: GPT:17805311 != 80003071
Nov 1 00:21:44.108366 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 1 00:21:44.110512 kernel: GPT:17805311 != 80003071
Nov 1 00:21:44.110529 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 1 00:21:44.112572 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:21:44.113480 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 1 00:21:44.122320 kernel: usbcore: registered new interface driver usbhid
Nov 1 00:21:44.122348 kernel: usbhid: USB HID core driver
Nov 1 00:21:44.130150 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Nov 1 00:21:44.130175 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 1 00:21:44.130302 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Nov 1 00:21:44.130453 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 00:21:44.145150 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Nov 1 00:21:44.147833 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 1 00:21:44.160394 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (463)
Nov 1 00:21:44.160409 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (461)
Nov 1 00:21:44.164790 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 1 00:21:44.169303 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 1 00:21:44.170623 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 1 00:21:44.176264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 1 00:21:44.182262 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 1 00:21:44.187347 disk-uuid[576]: Primary Header is updated.
Nov 1 00:21:44.187347 disk-uuid[576]: Secondary Entries is updated.
Nov 1 00:21:44.187347 disk-uuid[576]: Secondary Header is updated.
Nov 1 00:21:44.192149 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:21:44.199200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:21:44.205145 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:21:45.206226 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:21:45.207652 disk-uuid[577]: The operation has completed successfully.
Nov 1 00:21:45.256720 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:21:45.256844 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 1 00:21:45.286367 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 1 00:21:45.289058 sh[598]: Success
Nov 1 00:21:45.300366 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 1 00:21:45.347337 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:21:45.354215 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 1 00:21:45.355880 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 1 00:21:45.372488 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b
Nov 1 00:21:45.372541 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:21:45.374200 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 1 00:21:45.376883 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 1 00:21:45.376933 kernel: BTRFS info (device dm-0): using free space tree
Nov 1 00:21:45.388144 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 1 00:21:45.389916 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 1 00:21:45.390973 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 1 00:21:45.396251 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 1 00:21:45.398334 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 1 00:21:45.415154 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:21:45.415194 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:21:45.418390 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:21:45.423157 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:21:45.423188 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:21:45.432721 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:21:45.435691 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:21:45.440294 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 1 00:21:45.447393 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 1 00:21:45.481267 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:21:45.489354 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:21:45.513258 systemd-networkd[779]: lo: Link UP
Nov 1 00:21:45.513266 systemd-networkd[779]: lo: Gained carrier
Nov 1 00:21:45.515112 systemd-networkd[779]: Enumeration completed
Nov 1 00:21:45.515473 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:21:45.516575 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:45.516580 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:21:45.517540 systemd[1]: Reached target network.target - Network.
Nov 1 00:21:45.518377 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:45.518383 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:21:45.519553 systemd-networkd[779]: eth0: Link UP
Nov 1 00:21:45.519557 systemd-networkd[779]: eth0: Gained carrier
Nov 1 00:21:45.519564 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:45.524821 systemd-networkd[779]: eth1: Link UP
Nov 1 00:21:45.524824 systemd-networkd[779]: eth1: Gained carrier
Nov 1 00:21:45.524832 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:45.538115 ignition[724]: Ignition 2.19.0
Nov 1 00:21:45.538144 ignition[724]: Stage: fetch-offline
Nov 1 00:21:45.538175 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:21:45.538183 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:21:45.539873 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:21:45.538252 ignition[724]: parsed url from cmdline: ""
Nov 1 00:21:45.538255 ignition[724]: no config URL provided
Nov 1 00:21:45.538259 ignition[724]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:21:45.538265 ignition[724]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:21:45.538268 ignition[724]: failed to fetch config: resource requires networking
Nov 1 00:21:45.538441 ignition[724]: Ignition finished successfully
Nov 1 00:21:45.546480 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 1 00:21:45.548447 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 1 00:21:45.557743 ignition[786]: Ignition 2.19.0
Nov 1 00:21:45.557755 ignition[786]: Stage: fetch
Nov 1 00:21:45.557937 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:21:45.557949 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:21:45.558020 ignition[786]: parsed url from cmdline: ""
Nov 1 00:21:45.558023 ignition[786]: no config URL provided
Nov 1 00:21:45.558027 ignition[786]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:21:45.558033 ignition[786]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:21:45.558051 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Nov 1 00:21:45.558694 ignition[786]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 1 00:21:45.584203 systemd-networkd[779]: eth0: DHCPv4 address 46.62.149.99/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 1 00:21:45.758957 ignition[786]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Nov 1 00:21:45.769145 ignition[786]: GET result: OK
Nov 1 00:21:45.769218 ignition[786]: parsing config with SHA512: c92089abe67ae5b7351d2562bfbb7f468979a99e7621e7eedb4d6571207b465e913db6c1af784998bc133a39744bfdcbc75f184638e222d86e0285a58a0bbedd
Nov 1 00:21:45.772781 unknown[786]: fetched base config from "system"
Nov 1 00:21:45.772797 unknown[786]: fetched base config from "system"
Nov 1 00:21:45.773155 ignition[786]: fetch: fetch complete
Nov 1 00:21:45.772807 unknown[786]: fetched user config from "hetzner"
Nov 1 00:21:45.773160 ignition[786]: fetch: fetch passed
Nov 1 00:21:45.774808 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 00:21:45.773199 ignition[786]: Ignition finished successfully
Nov 1 00:21:45.782279 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 00:21:45.797579 ignition[793]: Ignition 2.19.0
Nov 1 00:21:45.797592 ignition[793]: Stage: kargs
Nov 1 00:21:45.797770 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:21:45.797779 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:21:45.798657 ignition[793]: kargs: kargs passed
Nov 1 00:21:45.801768 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 00:21:45.798697 ignition[793]: Ignition finished successfully
Nov 1 00:21:45.807263 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 00:21:45.819586 ignition[801]: Ignition 2.19.0
Nov 1 00:21:45.819604 ignition[801]: Stage: disks
Nov 1 00:21:45.819820 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Nov 1 00:21:45.826444 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 00:21:45.819831 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:21:45.828168 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 00:21:45.821100 ignition[801]: disks: disks passed
Nov 1 00:21:45.830235 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:21:45.825211 ignition[801]: Ignition finished successfully
Nov 1 00:21:45.830822 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:21:45.831432 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:21:45.832747 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:21:45.849406 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 00:21:45.863727 systemd-fsck[810]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 1 00:21:45.866948 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 00:21:45.880322 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 00:21:45.978158 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 00:21:45.977544 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 00:21:45.979354 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:21:45.986210 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:21:45.990244 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 00:21:45.992859 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 1 00:21:45.995796 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:21:45.995833 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:21:46.001487 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 00:21:46.005153 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (818)
Nov 1 00:21:46.008440 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:21:46.011396 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:21:46.011436 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:21:46.014264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 00:21:46.021912 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:21:46.021960 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:21:46.026343 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:21:46.068771 coreos-metadata[820]: Nov 01 00:21:46.068 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Nov 1 00:21:46.070360 coreos-metadata[820]: Nov 01 00:21:46.070 INFO Fetch successful
Nov 1 00:21:46.071216 coreos-metadata[820]: Nov 01 00:21:46.071 INFO wrote hostname ci-4081-3-6-n-b21903d23a to /sysroot/etc/hostname
Nov 1 00:21:46.074075 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:21:46.074236 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 00:21:46.080004 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:21:46.085145 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:21:46.088673 initrd-setup-root[867]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:21:46.190224 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 00:21:46.208366 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 00:21:46.211247 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 00:21:46.220229 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:21:46.245572 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 00:21:46.252029 ignition[935]: INFO : Ignition 2.19.0
Nov 1 00:21:46.252029 ignition[935]: INFO : Stage: mount
Nov 1 00:21:46.253570 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:21:46.253570 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:21:46.253570 ignition[935]: INFO : mount: mount passed
Nov 1 00:21:46.253570 ignition[935]: INFO : Ignition finished successfully
Nov 1 00:21:46.255958 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 00:21:46.262260 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 00:21:46.371412 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 00:21:46.376441 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 00:21:46.388197 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (946)
Nov 1 00:21:46.391275 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 00:21:46.391325 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:21:46.393761 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:21:46.399752 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 1 00:21:46.399798 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 1 00:21:46.402772 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:21:46.421926 ignition[963]: INFO : Ignition 2.19.0
Nov 1 00:21:46.421926 ignition[963]: INFO : Stage: files
Nov 1 00:21:46.423535 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:21:46.423535 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:21:46.425984 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:21:46.428010 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:21:46.428010 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:21:46.432494 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:21:46.433669 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:21:46.433669 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:21:46.432937 unknown[963]: wrote ssh authorized keys file for user: core
Nov 1 00:21:46.436592 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:21:46.436592 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:21:46.436592 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:21:46.436592 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:21:46.542368 systemd-networkd[779]: eth1: Gained IPv6LL
Nov 1 00:21:46.667337 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:21:46.979193 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:21:46.979193 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:21:46.979193 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:21:46.979193 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:21:46.979193 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:21:46.979193 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:21:46.979193 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:21:46.979193 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:21:46.979193 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:21:46.987350 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:21:46.987350 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:21:46.987350 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:21:46.987350 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:21:46.987350 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:21:46.987350 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 00:21:47.182445 systemd-networkd[779]: eth0: Gained IPv6LL
Nov 1 00:21:47.312930 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 1 00:21:47.578550 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:21:47.578550 ignition[963]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:21:47.581145 ignition[963]: INFO : files: files passed
Nov 1 00:21:47.581145 ignition[963]: INFO : Ignition finished successfully
Nov 1 00:21:47.582369 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 00:21:47.589307 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 00:21:47.595460 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 00:21:47.598194 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:21:47.598286 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 00:21:47.608650 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:21:47.608650 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:21:47.611469 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:21:47.612964 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:21:47.613703 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 00:21:47.629391 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 00:21:47.647045 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:21:47.647155 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 00:21:47.648566 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 00:21:47.649651 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 00:21:47.650936 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 00:21:47.661393 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 00:21:47.672438 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:21:47.678445 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 00:21:47.686666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:21:47.688059 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:21:47.688736 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 00:21:47.689830 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:21:47.689990 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:21:47.691197 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 00:21:47.691937 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 00:21:47.692983 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 00:21:47.693927 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:21:47.694871 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 00:21:47.695989 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 00:21:47.697065 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:21:47.698154 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 00:21:47.699202 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 00:21:47.700298 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 00:21:47.701255 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:21:47.701379 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:21:47.703265 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:21:47.704443 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:21:47.705640 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 00:21:47.706414 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:21:47.706947 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:21:47.707079 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:21:47.708424 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:21:47.708555 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:21:47.710000 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:21:47.710109 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 00:21:47.710916 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 1 00:21:47.711000 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 1 00:21:47.723466 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 00:21:47.724645 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:21:47.724866 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:21:47.729460 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 00:21:47.730642 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:21:47.730777 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:21:47.736435 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:21:47.738812 ignition[1016]: INFO : Ignition 2.19.0
Nov 1 00:21:47.738812 ignition[1016]: INFO : Stage: umount
Nov 1 00:21:47.738812 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:21:47.738812 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 1 00:21:47.738812 ignition[1016]: INFO : umount: umount passed
Nov 1 00:21:47.738812 ignition[1016]: INFO : Ignition finished successfully
Nov 1 00:21:47.736598 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:21:47.741954 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:21:47.742043 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 00:21:47.745183 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:21:47.745278 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 00:21:47.746589 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:21:47.746627 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 00:21:47.748067 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:21:47.748108 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 1 00:21:47.748574 systemd[1]: Stopped target network.target - Network.
Nov 1 00:21:47.749204 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:21:47.749245 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:21:47.750505 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 00:21:47.751653 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:21:47.753304 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:21:47.753817 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 00:21:47.755949 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 00:21:47.758249 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:21:47.758302 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:21:47.759200 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:21:47.759238 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:21:47.765434 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:21:47.765537 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 00:21:47.767035 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 00:21:47.767106 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 00:21:47.774093 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 00:21:47.776002 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 00:21:47.778582 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:21:47.779246 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:21:47.779330 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 00:21:47.781882 systemd-networkd[779]: eth1: DHCPv6 lease lost
Nov 1 00:21:47.784906 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:21:47.785196 systemd-networkd[779]: eth0: DHCPv6 lease lost
Nov 1 00:21:47.785414 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 00:21:47.786838 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:21:47.786955 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 00:21:47.790405 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:21:47.790554 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 00:21:47.792391 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:21:47.792539 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 00:21:47.797479 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:21:47.797555 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:21:47.804437 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 00:21:47.807398 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:21:47.807551 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:21:47.808961 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:21:47.809030 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:21:47.812411 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:21:47.812491 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:21:47.813682 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 00:21:47.813748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:21:47.815038 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:21:47.825515 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:21:47.825612 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 00:21:47.826991 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:21:47.827094 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:21:47.828342 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:21:47.828388 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:21:47.829478 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:21:47.829508 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:21:47.830632 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:21:47.830670 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:21:47.832231 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:21:47.832265 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:21:47.833578 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:21:47.833647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:21:47.841336 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 00:21:47.842870 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:21:47.842942 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:21:47.844350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:21:47.844411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:47.847583 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:21:47.847700 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 00:21:47.848956 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 00:21:47.851325 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 00:21:47.872706 systemd[1]: Switching root.
Nov 1 00:21:47.916379 systemd-journald[187]: Journal stopped
Nov 1 00:21:48.799602 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:21:48.799654 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:21:48.799665 kernel: SELinux: policy capability open_perms=1
Nov 1 00:21:48.799673 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:21:48.799680 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:21:48.799688 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:21:48.799698 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:21:48.799706 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:21:48.799713 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:21:48.799721 kernel: audit: type=1403 audit(1761956508.069:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:21:48.799730 systemd[1]: Successfully loaded SELinux policy in 45.351ms.
Nov 1 00:21:48.799747 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.162ms.
Nov 1 00:21:48.799758 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:21:48.799767 systemd[1]: Detected virtualization kvm. Nov 1 00:21:48.799777 systemd[1]: Detected architecture x86-64. Nov 1 00:21:48.799785 systemd[1]: Detected first boot. Nov 1 00:21:48.799793 systemd[1]: Hostname set to . Nov 1 00:21:48.799802 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:21:48.799810 zram_generator::config[1079]: No configuration found. Nov 1 00:21:48.799819 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:21:48.799827 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:21:48.799835 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 1 00:21:48.799845 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:21:48.799853 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:21:48.799861 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:21:48.799869 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:21:48.799877 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:21:48.799886 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:21:48.799908 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:21:48.799917 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:21:48.799926 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Nov 1 00:21:48.799936 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:21:48.799945 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:21:48.799953 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:21:48.799961 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:21:48.799969 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:21:48.799978 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:21:48.799986 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:21:48.799994 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:21:48.800004 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:21:48.800013 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:21:48.800023 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:21:48.800031 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:21:48.800040 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:21:48.800049 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:21:48.800057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:21:48.800067 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 00:21:48.800075 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:21:48.800083 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:21:48.800091 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Nov 1 00:21:48.800099 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:21:48.800107 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:21:48.800115 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:21:48.800244 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:21:48.800263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:48.800279 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:21:48.800289 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:21:48.800297 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:21:48.800305 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:21:48.800313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:21:48.800321 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:21:48.800330 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:21:48.800339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:21:48.800347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:21:48.800355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:21:48.800363 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:21:48.800371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:21:48.800380 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Nov 1 00:21:48.800388 kernel: fuse: init (API version 7.39)
Nov 1 00:21:48.800398 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 1 00:21:48.800407 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 1 00:21:48.800415 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:21:48.800423 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:21:48.800431 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 00:21:48.800439 kernel: loop: module loaded
Nov 1 00:21:48.800449 kernel: ACPI: bus type drm_connector registered
Nov 1 00:21:48.800457 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 00:21:48.800482 systemd-journald[1174]: Collecting audit messages is disabled.
Nov 1 00:21:48.800504 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:21:48.800514 systemd-journald[1174]: Journal started
Nov 1 00:21:48.800531 systemd-journald[1174]: Runtime Journal (/run/log/journal/2a7f518c7da74c72874d68622a9ee067) is 4.8M, max 38.4M, 33.6M free.
Nov 1 00:21:48.807492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:48.807538 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:21:48.808841 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 00:21:48.809692 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 00:21:48.810320 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 00:21:48.810912 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 00:21:48.815005 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 00:21:48.815622 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 00:21:48.816397 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 00:21:48.817334 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:21:48.818061 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:21:48.818285 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 00:21:48.818988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:21:48.819334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:21:48.820010 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:21:48.820286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:21:48.820946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:21:48.821113 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:21:48.822081 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:21:48.822272 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 00:21:48.822937 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:21:48.823301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:21:48.824046 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:21:48.824780 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 00:21:48.825807 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 00:21:48.833789 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 00:21:48.838226 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 00:21:48.842204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 00:21:48.845057 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:21:48.858345 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 00:21:48.865350 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 00:21:48.866004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:21:48.869278 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 00:21:48.872241 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:21:48.878949 systemd-journald[1174]: Time spent on flushing to /var/log/journal/2a7f518c7da74c72874d68622a9ee067 is 19.078ms for 1114 entries.
Nov 1 00:21:48.878949 systemd-journald[1174]: System Journal (/var/log/journal/2a7f518c7da74c72874d68622a9ee067) is 8.0M, max 584.8M, 576.8M free.
Nov 1 00:21:48.911318 systemd-journald[1174]: Received client request to flush runtime journal.
Nov 1 00:21:48.880768 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:21:48.884737 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:21:48.886459 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 00:21:48.888253 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 00:21:48.900306 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 00:21:48.901195 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 00:21:48.913499 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 00:21:48.925845 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:21:48.934357 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 00:21:48.937352 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:21:48.946998 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Nov 1 00:21:48.947023 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
Nov 1 00:21:48.956952 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:21:48.963422 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 00:21:48.965248 udevadm[1230]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 00:21:48.986851 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 00:21:49.002364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:21:49.014834 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Nov 1 00:21:49.015164 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
Nov 1 00:21:49.019274 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:21:49.352708 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 00:21:49.358429 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:21:49.376919 systemd-udevd[1247]: Using default interface naming scheme 'v255'.
Nov 1 00:21:49.407066 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:21:49.415979 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:21:49.433846 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 00:21:49.449721 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 1 00:21:49.495783 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 00:21:49.539155 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:21:49.542364 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Nov 1 00:21:49.556749 systemd-networkd[1251]: lo: Link UP
Nov 1 00:21:49.556761 systemd-networkd[1251]: lo: Gained carrier
Nov 1 00:21:49.558663 systemd-networkd[1251]: Enumeration completed
Nov 1 00:21:49.558775 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:21:49.562544 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:49.562550 systemd-networkd[1251]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:21:49.563221 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:49.563224 systemd-networkd[1251]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:21:49.563743 systemd-networkd[1251]: eth0: Link UP
Nov 1 00:21:49.563746 systemd-networkd[1251]: eth0: Gained carrier
Nov 1 00:21:49.563759 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:49.567237 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 00:21:49.567381 systemd-networkd[1251]: eth1: Link UP
Nov 1 00:21:49.567384 systemd-networkd[1251]: eth1: Gained carrier
Nov 1 00:21:49.567395 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:49.570704 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:21:49.573040 systemd-networkd[1251]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:49.588653 systemd-networkd[1251]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:49.590677 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1258)
Nov 1 00:21:49.598674 systemd-networkd[1251]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 1 00:21:49.610083 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped.
Nov 1 00:21:49.610267 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Nov 1 00:21:49.610316 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:49.610420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:21:49.621280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:21:49.629284 systemd-networkd[1251]: eth0: DHCPv4 address 46.62.149.99/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 1 00:21:49.629444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:21:49.634237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:21:49.635519 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:21:49.635598 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:21:49.635647 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:49.636517 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:21:49.636889 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:21:49.644857 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:21:49.645006 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:21:49.653539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:21:49.653724 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:21:49.667681 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 1 00:21:49.667739 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 1 00:21:49.673608 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:21:49.673659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:21:49.680461 kernel: EDAC MC: Ver: 3.0.0
Nov 1 00:21:49.690162 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 1 00:21:49.690491 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 1 00:21:49.690627 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 1 00:21:49.694369 kernel: Console: switching to colour dummy device 80x25
Nov 1 00:21:49.694407 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Nov 1 00:21:49.699166 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 1 00:21:49.699212 kernel: [drm] features: -context_init
Nov 1 00:21:49.715242 kernel: [drm] number of scanouts: 1
Nov 1 00:21:49.715321 kernel: [drm] number of cap sets: 0
Nov 1 00:21:49.720926 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 1 00:21:49.721211 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Nov 1 00:21:49.729188 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 1 00:21:49.729287 kernel: Console: switching to colour frame buffer device 160x50
Nov 1 00:21:49.736949 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 1 00:21:49.744361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:21:49.745436 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:21:49.745655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:49.754352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:21:49.762735 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:21:49.763081 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:49.770336 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:21:49.818640 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:49.887648 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 1 00:21:49.906592 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 1 00:21:49.919223 lvm[1316]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:21:49.946246 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 1 00:21:49.946971 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:21:49.952359 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 1 00:21:49.958655 lvm[1319]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:21:49.984539 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 1 00:21:49.985967 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 00:21:49.988286 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:21:49.988334 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:21:49.988572 systemd[1]: Reached target machines.target - Containers.
Nov 1 00:21:49.989723 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 00:21:49.994482 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 00:21:49.998054 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 00:21:50.001999 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:21:50.004322 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 00:21:50.013346 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 00:21:50.015919 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 00:21:50.018226 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 00:21:50.034036 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 00:21:50.045191 kernel: loop0: detected capacity change from 0 to 224512
Nov 1 00:21:50.055741 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:21:50.056825 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 00:21:50.082157 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:21:50.106168 kernel: loop1: detected capacity change from 0 to 142488
Nov 1 00:21:50.144217 kernel: loop2: detected capacity change from 0 to 140768
Nov 1 00:21:50.189229 kernel: loop3: detected capacity change from 0 to 8
Nov 1 00:21:50.213584 kernel: loop4: detected capacity change from 0 to 224512
Nov 1 00:21:50.240213 kernel: loop5: detected capacity change from 0 to 142488
Nov 1 00:21:50.262673 kernel: loop6: detected capacity change from 0 to 140768
Nov 1 00:21:50.284101 kernel: loop7: detected capacity change from 0 to 8
Nov 1 00:21:50.285915 (sd-merge)[1341]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Nov 1 00:21:50.286511 (sd-merge)[1341]: Merged extensions into '/usr'.
Nov 1 00:21:50.294155 systemd[1]: Reloading requested from client PID 1327 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 00:21:50.294367 systemd[1]: Reloading...
Nov 1 00:21:50.383202 zram_generator::config[1378]: No configuration found.
Nov 1 00:21:50.464207 ldconfig[1323]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:21:50.491069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:21:50.546345 systemd[1]: Reloading finished in 251 ms.
Nov 1 00:21:50.562438 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 00:21:50.570376 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 00:21:50.581336 systemd[1]: Starting ensure-sysext.service...
Nov 1 00:21:50.586419 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:21:50.591861 systemd[1]: Reloading requested from client PID 1419 ('systemctl') (unit ensure-sysext.service)...
Nov 1 00:21:50.591884 systemd[1]: Reloading...
Nov 1 00:21:50.602473 systemd-tmpfiles[1420]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 00:21:50.602740 systemd-tmpfiles[1420]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 00:21:50.603347 systemd-tmpfiles[1420]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 00:21:50.603532 systemd-tmpfiles[1420]: ACLs are not supported, ignoring.
Nov 1 00:21:50.603575 systemd-tmpfiles[1420]: ACLs are not supported, ignoring.
Nov 1 00:21:50.606336 systemd-tmpfiles[1420]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:21:50.606415 systemd-tmpfiles[1420]: Skipping /boot
Nov 1 00:21:50.614946 systemd-tmpfiles[1420]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:21:50.615028 systemd-tmpfiles[1420]: Skipping /boot
Nov 1 00:21:50.665191 zram_generator::config[1452]: No configuration found.
Nov 1 00:21:50.771239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:21:50.830395 systemd-networkd[1251]: eth0: Gained IPv6LL
Nov 1 00:21:50.839607 systemd[1]: Reloading finished in 247 ms.
Nov 1 00:21:50.856732 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 1 00:21:50.866706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:21:50.879553 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 00:21:50.893435 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 00:21:50.898193 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 00:21:50.908795 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:21:50.915376 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 00:21:50.921276 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:50.923531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:21:50.930362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:21:50.934456 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:21:50.948650 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:21:50.949328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:21:50.949427 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:50.966792 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 00:21:50.971841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:21:50.974564 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:21:50.981798 augenrules[1527]: No rules
Nov 1 00:21:50.976754 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:21:50.976932 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:21:50.986760 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 00:21:50.995953 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 00:21:51.000540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:21:51.001022 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:21:51.008648 systemd[1]: Finished ensure-sysext.service.
Nov 1 00:21:51.011351 systemd-resolved[1511]: Positive Trust Anchors:
Nov 1 00:21:51.011364 systemd-resolved[1511]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:21:51.011388 systemd-resolved[1511]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:21:51.014755 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:51.015000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:21:51.016991 systemd-resolved[1511]: Using system hostname 'ci-4081-3-6-n-b21903d23a'.
Nov 1 00:21:51.019285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:21:51.025247 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:21:51.029254 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:21:51.034260 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:21:51.034710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:21:51.044263 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 00:21:51.052276 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 00:21:51.053628 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:51.053920 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:21:51.059187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:21:51.065340 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:21:51.066236 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 00:21:51.067007 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:21:51.071285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:21:51.076470 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:21:51.076665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:21:51.079427 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:21:51.079655 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:21:51.082355 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 00:21:51.088314 systemd[1]: Reached target network.target - Network.
Nov 1 00:21:51.089395 systemd[1]: Reached target network-online.target - Network is Online.
Nov 1 00:21:51.089853 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:21:51.091431 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:21:51.091518 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:21:51.091545 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:21:51.134942 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 00:21:51.137797 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:21:51.138718 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 00:21:51.140159 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 00:21:51.141009 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 00:21:51.141992 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:21:51.142181 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:21:51.142917 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 00:21:51.143996 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 00:21:51.144696 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 00:21:51.145554 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:21:51.148255 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 00:21:51.151727 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 00:21:51.151965 systemd-networkd[1251]: eth1: Gained IPv6LL
Nov 1 00:21:51.154030 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 00:21:51.159454 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 00:21:51.160156 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:21:51.160666 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:21:51.162393 systemd[1]: System is tainted: cgroupsv1
Nov 1 00:21:51.162453 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 00:21:51.162477 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 00:21:51.163508 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 00:21:51.169327 systemd-timesyncd[1552]: Contacted time server 85.214.133.14:123 (0.flatcar.pool.ntp.org).
Nov 1 00:21:51.169373 systemd-timesyncd[1552]: Initial clock synchronization to Sat 2025-11-01 00:21:51.276279 UTC.
Nov 1 00:21:51.171297 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 1 00:21:51.179318 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 00:21:51.184372 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 00:21:51.193585 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 00:21:51.194122 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 00:21:51.197461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:21:51.203399 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 00:21:51.210556 coreos-metadata[1569]: Nov 01 00:21:51.210 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Nov 1 00:21:51.222311 coreos-metadata[1569]: Nov 01 00:21:51.211 INFO Fetch successful
Nov 1 00:21:51.222311 coreos-metadata[1569]: Nov 01 00:21:51.211 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Nov 1 00:21:51.222311 coreos-metadata[1569]: Nov 01 00:21:51.212 INFO Fetch successful
Nov 1 00:21:51.222395 jq[1574]: false
Nov 1 00:21:51.223391 extend-filesystems[1575]: Found loop4
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found loop5
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found loop6
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found loop7
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found sda
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found sda1
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found sda2
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found sda3
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found usr
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found sda4
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found sda6
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found sda7
Nov 1 00:21:51.225917 extend-filesystems[1575]: Found sda9
Nov 1 00:21:51.225917 extend-filesystems[1575]: Checking size of /dev/sda9
Nov 1 00:21:51.273213 extend-filesystems[1575]: Resized partition /dev/sda9
Nov 1 00:21:51.295720 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Nov 1 00:21:51.262237 dbus-daemon[1570]: [system] SELinux support is enabled
Nov 1 00:21:51.315877 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1248)
Nov 1 00:21:51.228485 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 1 00:21:51.316006 extend-filesystems[1591]: resize2fs 1.47.1 (20-May-2024)
Nov 1 00:21:51.245314 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 00:21:51.259366 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent.
Nov 1 00:21:51.288364 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 00:21:51.297314 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 00:21:51.309266 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 00:21:51.315592 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:21:51.320728 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 00:21:51.329207 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 00:21:51.337823 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 00:21:51.343616 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:21:51.346389 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 00:21:51.354052 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:21:51.354273 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 00:21:51.365735 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 1 00:21:51.365990 update_engine[1611]: I20251101 00:21:51.365921 1611 main.cc:92] Flatcar Update Engine starting
Nov 1 00:21:51.368562 update_engine[1611]: I20251101 00:21:51.368532 1611 update_check_scheduler.cc:74] Next update check in 6m41s
Nov 1 00:21:51.376162 jq[1612]: true
Nov 1 00:21:51.374839 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:21:51.375683 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 00:21:51.420459 (ntainerd)[1624]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 1 00:21:51.423215 systemd-logind[1610]: New seat seat0.
Nov 1 00:21:51.425012 systemd-logind[1610]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 1 00:21:51.426189 systemd-logind[1610]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 00:21:51.426477 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 00:21:51.439879 dbus-daemon[1570]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 00:21:51.443445 jq[1622]: true
Nov 1 00:21:51.459721 tar[1620]: linux-amd64/LICENSE
Nov 1 00:21:51.461521 tar[1620]: linux-amd64/helm
Nov 1 00:21:51.493094 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 00:21:51.499617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:21:51.499755 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 00:21:51.500237 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:21:51.500335 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 00:21:51.504676 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 00:21:51.515368 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 00:21:51.532710 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 1 00:21:51.539181 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 1 00:21:51.617187 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Nov 1 00:21:51.637896 extend-filesystems[1591]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 1 00:21:51.637896 extend-filesystems[1591]: old_desc_blocks = 1, new_desc_blocks = 5
Nov 1 00:21:51.637896 extend-filesystems[1591]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Nov 1 00:21:51.637531 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 00:21:51.653882 bash[1664]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:21:51.653983 extend-filesystems[1575]: Resized filesystem in /dev/sda9
Nov 1 00:21:51.653983 extend-filesystems[1575]: Found sr0
Nov 1 00:21:51.637755 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 00:21:51.646503 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 00:21:51.658419 systemd[1]: Starting sshkeys.service...
Nov 1 00:21:51.670826 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 1 00:21:51.677943 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 1 00:21:51.704340 sshd_keygen[1623]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 00:21:51.712213 locksmithd[1648]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:21:51.721723 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 1 00:21:51.734387 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 1 00:21:51.743216 coreos-metadata[1680]: Nov 01 00:21:51.743 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Nov 1 00:21:51.743817 coreos-metadata[1680]: Nov 01 00:21:51.743 INFO Fetch successful
Nov 1 00:21:51.748851 unknown[1680]: wrote ssh authorized keys file for user: core
Nov 1 00:21:51.764688 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:21:51.765055 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 1 00:21:51.781982 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 1 00:21:51.796382 update-ssh-keys[1698]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:21:51.799591 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 1 00:21:51.806501 systemd[1]: Finished sshkeys.service.
Nov 1 00:21:51.812122 containerd[1624]: time="2025-11-01T00:21:51.812055354Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 1 00:21:51.815537 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 1 00:21:51.824428 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 1 00:21:51.834547 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 1 00:21:51.839142 systemd[1]: Reached target getty.target - Login Prompts.
Nov 1 00:21:51.855256 containerd[1624]: time="2025-11-01T00:21:51.855026728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:21:51.857552 containerd[1624]: time="2025-11-01T00:21:51.857522128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.857621925Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.857639207Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859233678Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859251341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859301926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859313337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859501691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859516619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859529293Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859537157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859594174Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860083 containerd[1624]: time="2025-11-01T00:21:51.859757511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860368 containerd[1624]: time="2025-11-01T00:21:51.859860493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:21:51.860368 containerd[1624]: time="2025-11-01T00:21:51.859872346Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:21:51.860368 containerd[1624]: time="2025-11-01T00:21:51.859951685Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:21:51.860368 containerd[1624]: time="2025-11-01T00:21:51.859994475Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:21:51.865458 containerd[1624]: time="2025-11-01T00:21:51.865438285Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:21:51.865541 containerd[1624]: time="2025-11-01T00:21:51.865530067Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:21:51.865612 containerd[1624]: time="2025-11-01T00:21:51.865602273Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 1 00:21:51.865674 containerd[1624]: time="2025-11-01T00:21:51.865663697Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 1 00:21:51.865721 containerd[1624]: time="2025-11-01T00:21:51.865712259Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 00:21:51.865875 containerd[1624]: time="2025-11-01T00:21:51.865860527Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 00:21:51.866207 containerd[1624]: time="2025-11-01T00:21:51.866190435Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 00:21:51.866328 containerd[1624]: time="2025-11-01T00:21:51.866314969Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 1 00:21:51.866966 containerd[1624]: time="2025-11-01T00:21:51.866952835Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 1 00:21:51.867015 containerd[1624]: time="2025-11-01T00:21:51.867005945Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 1 00:21:51.867056 containerd[1624]: time="2025-11-01T00:21:51.867046922Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:21:51.867113 containerd[1624]: time="2025-11-01T00:21:51.867102416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:21:51.867181 containerd[1624]: time="2025-11-01T00:21:51.867171296Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 00:21:51.867246 containerd[1624]: time="2025-11-01T00:21:51.867214506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867294516Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867309895Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867321797Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867331145Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867349820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867360821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867370599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867380708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867389995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867409743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867419461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867430481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867440861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868143 containerd[1624]: time="2025-11-01T00:21:51.867452923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867463413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867472530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867483080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867494462Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867512154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867521211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867529356Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867561647Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867574842Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867582797Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867591964Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867599398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867610168Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 1 00:21:51.868351 containerd[1624]: time="2025-11-01T00:21:51.867618203Z" level=info msg="NRI interface is disabled by configuration."
Nov 1 00:21:51.868532 containerd[1624]: time="2025-11-01T00:21:51.867625808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 00:21:51.868547 containerd[1624]: time="2025-11-01T00:21:51.867825202Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 00:21:51.868547 containerd[1624]: time="2025-11-01T00:21:51.867875216Z" level=info msg="Connect containerd service"
Nov 1 00:21:51.868547 containerd[1624]: time="2025-11-01T00:21:51.867918417Z" level=info msg="using legacy CRI server"
Nov 1 00:21:51.868547 containerd[1624]: time="2025-11-01T00:21:51.867925971Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 1 00:21:51.868547 containerd[1624]: time="2025-11-01T00:21:51.868001222Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 00:21:51.869535 containerd[1624]: time="2025-11-01T00:21:51.869498289Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:21:51.869845 containerd[1624]: time="2025-11-01T00:21:51.869820614Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 00:21:51.870177 containerd[1624]: time="2025-11-01T00:21:51.870163347Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 00:21:51.870278 containerd[1624]: time="2025-11-01T00:21:51.870255320Z" level=info msg="Start subscribing containerd event"
Nov 1 00:21:51.870341 containerd[1624]: time="2025-11-01T00:21:51.870330541Z" level=info msg="Start recovering state"
Nov 1 00:21:51.870418 containerd[1624]: time="2025-11-01T00:21:51.870407525Z" level=info msg="Start event monitor"
Nov 1 00:21:51.870463 containerd[1624]: time="2025-11-01T00:21:51.870454743Z" level=info msg="Start snapshots syncer"
Nov 1 00:21:51.870498 containerd[1624]: time="2025-11-01T00:21:51.870490911Z" level=info msg="Start cni network conf syncer for default"
Nov 1 00:21:51.870591 containerd[1624]: time="2025-11-01T00:21:51.870580780Z" level=info msg="Start streaming server"
Nov 1 00:21:51.870677 containerd[1624]: time="2025-11-01T00:21:51.870666430Z" level=info msg="containerd successfully booted in 0.060870s"
Nov 1 00:21:51.871572 systemd[1]: Started containerd.service - containerd container runtime.
Nov 1 00:21:52.163225 tar[1620]: linux-amd64/README.md
Nov 1 00:21:52.175419 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 1 00:21:52.578299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:21:52.581890 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 1 00:21:52.583347 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:21:52.586437 systemd[1]: Startup finished in 6.657s (kernel) + 4.561s (userspace) = 11.219s.
Nov 1 00:21:53.165809 kubelet[1728]: E1101 00:21:53.165728 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:21:53.168801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:21:53.169010 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:22:01.525930 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 1 00:22:01.531538 systemd[1]: Started sshd@0-46.62.149.99:22-147.75.109.163:56846.service - OpenSSH per-connection server daemon (147.75.109.163:56846).
Nov 1 00:22:02.548198 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 56846 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:22:02.549642 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:02.557528 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 1 00:22:02.563681 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 1 00:22:02.566053 systemd-logind[1610]: New session 1 of user core.
Nov 1 00:22:02.578289 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 1 00:22:02.585425 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 1 00:22:02.588368 (systemd)[1746]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:22:02.691062 systemd[1746]: Queued start job for default target default.target.
Nov 1 00:22:02.691392 systemd[1746]: Created slice app.slice - User Application Slice.
Nov 1 00:22:02.691408 systemd[1746]: Reached target paths.target - Paths.
Nov 1 00:22:02.691419 systemd[1746]: Reached target timers.target - Timers.
Nov 1 00:22:02.696261 systemd[1746]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 1 00:22:02.702779 systemd[1746]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 1 00:22:02.702853 systemd[1746]: Reached target sockets.target - Sockets.
Nov 1 00:22:02.702869 systemd[1746]: Reached target basic.target - Basic System.
Nov 1 00:22:02.702909 systemd[1746]: Reached target default.target - Main User Target.
Nov 1 00:22:02.702935 systemd[1746]: Startup finished in 109ms.
Nov 1 00:22:02.703190 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 1 00:22:02.705299 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 1 00:22:03.405534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:22:03.417380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:03.420396 systemd[1]: Started sshd@1-46.62.149.99:22-147.75.109.163:56848.service - OpenSSH per-connection server daemon (147.75.109.163:56848).
Nov 1 00:22:03.522311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:03.526703 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:22:03.567533 kubelet[1772]: E1101 00:22:03.567472 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:22:03.570918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:22:03.571065 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:22:04.426402 sshd[1759]: Accepted publickey for core from 147.75.109.163 port 56848 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:22:04.428043 sshd[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:04.432502 systemd-logind[1610]: New session 2 of user core.
Nov 1 00:22:04.439339 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 1 00:22:05.118064 sshd[1759]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:05.120691 systemd[1]: sshd@1-46.62.149.99:22-147.75.109.163:56848.service: Deactivated successfully.
Nov 1 00:22:05.123452 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 00:22:05.123824 systemd-logind[1610]: Session 2 logged out. Waiting for processes to exit.
Nov 1 00:22:05.124935 systemd-logind[1610]: Removed session 2.
Nov 1 00:22:05.288410 systemd[1]: Started sshd@2-46.62.149.99:22-147.75.109.163:56856.service - OpenSSH per-connection server daemon (147.75.109.163:56856).
Nov 1 00:22:06.299576 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 56856 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:22:06.301500 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:06.308416 systemd-logind[1610]: New session 3 of user core.
Nov 1 00:22:06.313465 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 1 00:22:07.000187 sshd[1786]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:07.002802 systemd[1]: sshd@2-46.62.149.99:22-147.75.109.163:56856.service: Deactivated successfully.
Nov 1 00:22:07.007588 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 00:22:07.007640 systemd-logind[1610]: Session 3 logged out. Waiting for processes to exit.
Nov 1 00:22:07.010954 systemd-logind[1610]: Removed session 3.
Nov 1 00:22:07.166327 systemd[1]: Started sshd@3-46.62.149.99:22-147.75.109.163:56860.service - OpenSSH per-connection server daemon (147.75.109.163:56860).
Nov 1 00:22:08.162278 sshd[1794]: Accepted publickey for core from 147.75.109.163 port 56860 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:22:08.163686 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:08.168350 systemd-logind[1610]: New session 4 of user core.
Nov 1 00:22:08.174467 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 1 00:22:08.854640 sshd[1794]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:08.857169 systemd[1]: sshd@3-46.62.149.99:22-147.75.109.163:56860.service: Deactivated successfully.
Nov 1 00:22:08.860890 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 00:22:08.861491 systemd-logind[1610]: Session 4 logged out. Waiting for processes to exit.
Nov 1 00:22:08.862696 systemd-logind[1610]: Removed session 4.
Nov 1 00:22:09.056378 systemd[1]: Started sshd@4-46.62.149.99:22-147.75.109.163:56874.service - OpenSSH per-connection server daemon (147.75.109.163:56874).
Nov 1 00:22:10.156525 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 56874 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:22:10.157732 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:10.162205 systemd-logind[1610]: New session 5 of user core.
Nov 1 00:22:10.168430 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 1 00:22:10.749530 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 00:22:10.749917 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:22:10.766223 sudo[1806]: pam_unix(sudo:session): session closed for user root
Nov 1 00:22:10.947228 sshd[1802]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:10.950408 systemd[1]: sshd@4-46.62.149.99:22-147.75.109.163:56874.service: Deactivated successfully.
Nov 1 00:22:10.954234 systemd-logind[1610]: Session 5 logged out. Waiting for processes to exit.
Nov 1 00:22:10.954520 systemd[1]: session-5.scope: Deactivated successfully.
Nov 1 00:22:10.956059 systemd-logind[1610]: Removed session 5.
Nov 1 00:22:11.140486 systemd[1]: Started sshd@5-46.62.149.99:22-147.75.109.163:38840.service - OpenSSH per-connection server daemon (147.75.109.163:38840).
Nov 1 00:22:12.253671 sshd[1811]: Accepted publickey for core from 147.75.109.163 port 38840 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:22:12.255167 sshd[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:12.259688 systemd-logind[1610]: New session 6 of user core.
Nov 1 00:22:12.265484 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 00:22:12.843329 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:22:12.843629 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:12.847665 sudo[1816]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:12.853743 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:22:12.854145 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:12.866327 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 1 00:22:12.869525 auditctl[1819]: No rules Nov 1 00:22:12.869864 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:22:12.870178 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 1 00:22:12.880558 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 1 00:22:12.902470 augenrules[1838]: No rules Nov 1 00:22:12.904254 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 1 00:22:12.906757 sudo[1815]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:13.089644 sshd[1811]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:13.094803 systemd[1]: sshd@5-46.62.149.99:22-147.75.109.163:38840.service: Deactivated successfully. Nov 1 00:22:13.095156 systemd-logind[1610]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:22:13.098767 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:22:13.099878 systemd-logind[1610]: Removed session 6. Nov 1 00:22:13.279650 systemd[1]: Started sshd@6-46.62.149.99:22-147.75.109.163:38856.service - OpenSSH per-connection server daemon (147.75.109.163:38856). Nov 1 00:22:13.682737 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Nov 1 00:22:13.688593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:13.819017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:13.821692 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:22:13.860203 kubelet[1861]: E1101 00:22:13.860119 1861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:13.864361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:13.864620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:14.388325 sshd[1847]: Accepted publickey for core from 147.75.109.163 port 38856 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:22:14.389631 sshd[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:14.395878 systemd-logind[1610]: New session 7 of user core. Nov 1 00:22:14.402556 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:22:14.975172 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:22:14.975439 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:15.251516 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 1 00:22:15.252786 (dockerd)[1888]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:22:15.526560 dockerd[1888]: time="2025-11-01T00:22:15.526437652Z" level=info msg="Starting up" Nov 1 00:22:15.629652 systemd[1]: var-lib-docker-metacopy\x2dcheck2616928736-merged.mount: Deactivated successfully. Nov 1 00:22:15.647977 dockerd[1888]: time="2025-11-01T00:22:15.647914440Z" level=info msg="Loading containers: start." Nov 1 00:22:15.745174 kernel: Initializing XFRM netlink socket Nov 1 00:22:15.817260 systemd-networkd[1251]: docker0: Link UP Nov 1 00:22:15.835394 dockerd[1888]: time="2025-11-01T00:22:15.835340880Z" level=info msg="Loading containers: done." Nov 1 00:22:15.850084 dockerd[1888]: time="2025-11-01T00:22:15.850017078Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:22:15.850234 dockerd[1888]: time="2025-11-01T00:22:15.850184174Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 1 00:22:15.850305 dockerd[1888]: time="2025-11-01T00:22:15.850279271Z" level=info msg="Daemon has completed initialization" Nov 1 00:22:15.878829 dockerd[1888]: time="2025-11-01T00:22:15.877631840Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:22:15.877850 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:22:17.229472 containerd[1624]: time="2025-11-01T00:22:17.229209490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:22:17.800385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302909328.mount: Deactivated successfully. 
Nov 1 00:22:18.769658 containerd[1624]: time="2025-11-01T00:22:18.769604339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:18.770905 containerd[1624]: time="2025-11-01T00:22:18.770863081Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28838016"
Nov 1 00:22:18.772324 containerd[1624]: time="2025-11-01T00:22:18.772283716Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:18.774631 containerd[1624]: time="2025-11-01T00:22:18.774596452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:18.775659 containerd[1624]: time="2025-11-01T00:22:18.775634660Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.546387732s"
Nov 1 00:22:18.776143 containerd[1624]: time="2025-11-01T00:22:18.775768696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 1 00:22:18.776757 containerd[1624]: time="2025-11-01T00:22:18.776619045Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 1 00:22:20.060755 containerd[1624]: time="2025-11-01T00:22:20.060693511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:20.061904 containerd[1624]: time="2025-11-01T00:22:20.061850689Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787049"
Nov 1 00:22:20.063079 containerd[1624]: time="2025-11-01T00:22:20.062774444Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:20.065351 containerd[1624]: time="2025-11-01T00:22:20.065322652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:20.066202 containerd[1624]: time="2025-11-01T00:22:20.066177928Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.289531623s"
Nov 1 00:22:20.066273 containerd[1624]: time="2025-11-01T00:22:20.066260079Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 1 00:22:20.066737 containerd[1624]: time="2025-11-01T00:22:20.066642627Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 1 00:22:21.036448 containerd[1624]: time="2025-11-01T00:22:21.036396972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:21.037378 containerd[1624]: time="2025-11-01T00:22:21.037335774Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176311"
Nov 1 00:22:21.038084 containerd[1624]: time="2025-11-01T00:22:21.038049962Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:21.041150 containerd[1624]: time="2025-11-01T00:22:21.040581312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:21.041710 containerd[1624]: time="2025-11-01T00:22:21.041492725Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 974.694246ms"
Nov 1 00:22:21.041710 containerd[1624]: time="2025-11-01T00:22:21.041518711Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 1 00:22:21.042001 containerd[1624]: time="2025-11-01T00:22:21.041901445Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 1 00:22:21.963312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862112257.mount: Deactivated successfully.
Nov 1 00:22:22.274484 containerd[1624]: time="2025-11-01T00:22:22.274196903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:22.275666 containerd[1624]: time="2025-11-01T00:22:22.275597480Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924234"
Nov 1 00:22:22.276471 containerd[1624]: time="2025-11-01T00:22:22.276435054Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:22.279138 containerd[1624]: time="2025-11-01T00:22:22.279017375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:22.279890 containerd[1624]: time="2025-11-01T00:22:22.279393853Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.237467624s"
Nov 1 00:22:22.279890 containerd[1624]: time="2025-11-01T00:22:22.279420509Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 1 00:22:22.280640 containerd[1624]: time="2025-11-01T00:22:22.280591390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 1 00:22:22.824095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4153295885.mount: Deactivated successfully.
Nov 1 00:22:23.461928 containerd[1624]: time="2025-11-01T00:22:23.461840790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:23.464306 containerd[1624]: time="2025-11-01T00:22:23.464207074Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565335"
Nov 1 00:22:23.465573 containerd[1624]: time="2025-11-01T00:22:23.465462238Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:23.468780 containerd[1624]: time="2025-11-01T00:22:23.468289376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:23.469714 containerd[1624]: time="2025-11-01T00:22:23.469678504Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.189058504s"
Nov 1 00:22:23.469714 containerd[1624]: time="2025-11-01T00:22:23.469710080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 1 00:22:23.470592 containerd[1624]: time="2025-11-01T00:22:23.470562253Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 1 00:22:23.896193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 1 00:22:23.902760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:23.904766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341286073.mount: Deactivated successfully.
Nov 1 00:22:23.912157 containerd[1624]: time="2025-11-01T00:22:23.911713163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:23.913219 containerd[1624]: time="2025-11-01T00:22:23.913180888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Nov 1 00:22:23.914355 containerd[1624]: time="2025-11-01T00:22:23.914318884Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:23.917116 containerd[1624]: time="2025-11-01T00:22:23.917089302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:23.918631 containerd[1624]: time="2025-11-01T00:22:23.918007513Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 447.415549ms"
Nov 1 00:22:23.918731 containerd[1624]: time="2025-11-01T00:22:23.918716052Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 1 00:22:23.919779 containerd[1624]: time="2025-11-01T00:22:23.919762384Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 1 00:22:24.013296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:24.013573 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:22:24.052905 kubelet[2168]: E1101 00:22:24.052825 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:22:24.054815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:22:24.054982 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:22:24.425211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2247985923.mount: Deactivated successfully.
Nov 1 00:22:25.838003 containerd[1624]: time="2025-11-01T00:22:25.837719435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:25.839158 containerd[1624]: time="2025-11-01T00:22:25.839096847Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682132"
Nov 1 00:22:25.840166 containerd[1624]: time="2025-11-01T00:22:25.839953315Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:25.845622 containerd[1624]: time="2025-11-01T00:22:25.845555525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:25.850429 containerd[1624]: time="2025-11-01T00:22:25.850387127Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.930517756s"
Nov 1 00:22:25.850488 containerd[1624]: time="2025-11-01T00:22:25.850433525Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 1 00:22:28.541842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:28.550530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:28.587405 systemd[1]: Reloading requested from client PID 2259 ('systemctl') (unit session-7.scope)...
Nov 1 00:22:28.587430 systemd[1]: Reloading...
Nov 1 00:22:28.673150 zram_generator::config[2302]: No configuration found.
Nov 1 00:22:28.755879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:22:28.814260 systemd[1]: Reloading finished in 226 ms.
Nov 1 00:22:28.858476 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:28.859638 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 00:22:28.860031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:28.864625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:28.977291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:28.986477 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 00:22:29.027152 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:22:29.027152 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 00:22:29.027152 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:22:29.027152 kubelet[2368]: I1101 00:22:29.025996 2368 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 00:22:29.436150 kubelet[2368]: I1101 00:22:29.436096 2368 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 1 00:22:29.436328 kubelet[2368]: I1101 00:22:29.436317 2368 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 00:22:29.436634 kubelet[2368]: I1101 00:22:29.436624 2368 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 1 00:22:29.468552 kubelet[2368]: I1101 00:22:29.468498 2368 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 00:22:29.472165 kubelet[2368]: E1101 00:22:29.470087 2368 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://46.62.149.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.149.99:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:22:29.482272 kubelet[2368]: E1101 00:22:29.482218 2368 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 00:22:29.482272 kubelet[2368]: I1101 00:22:29.482262 2368 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 1 00:22:29.487089 kubelet[2368]: I1101 00:22:29.487042 2368 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 1 00:22:29.489146 kubelet[2368]: I1101 00:22:29.489084 2368 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 00:22:29.489371 kubelet[2368]: I1101 00:22:29.489122 2368 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-b21903d23a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Nov 1 00:22:29.491691 kubelet[2368]: I1101 00:22:29.491650 2368 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 00:22:29.491691 kubelet[2368]: I1101 00:22:29.491674 2368 container_manager_linux.go:304] "Creating device plugin manager"
Nov 1 00:22:29.492853 kubelet[2368]: I1101 00:22:29.492816 2368 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:22:29.496143 kubelet[2368]: I1101 00:22:29.496104 2368 kubelet.go:446] "Attempting to sync node with API server"
Nov 1 00:22:29.496196 kubelet[2368]: I1101 00:22:29.496147 2368 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 00:22:29.496196 kubelet[2368]: I1101 00:22:29.496170 2368 kubelet.go:352] "Adding apiserver pod source"
Nov 1 00:22:29.496196 kubelet[2368]: I1101 00:22:29.496183 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 00:22:29.507145 kubelet[2368]: I1101 00:22:29.507020 2368 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 00:22:29.511189 kubelet[2368]: W1101 00:22:29.510388 2368 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.62.149.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.62.149.99:6443: connect: connection refused
Nov 1 00:22:29.511189 kubelet[2368]: E1101 00:22:29.510456 2368 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.62.149.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.149.99:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:22:29.511189 kubelet[2368]: W1101 00:22:29.510543 2368 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.62.149.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-b21903d23a&limit=500&resourceVersion=0": dial tcp 46.62.149.99:6443: connect: connection refused
Nov 1 00:22:29.511189 kubelet[2368]: E1101 00:22:29.510579 2368 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.62.149.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-b21903d23a&limit=500&resourceVersion=0\": dial tcp 46.62.149.99:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:22:29.511695 kubelet[2368]: I1101 00:22:29.511652 2368 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 1 00:22:29.511791 kubelet[2368]: W1101 00:22:29.511738 2368 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 1 00:22:29.512440 kubelet[2368]: I1101 00:22:29.512419 2368 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 1 00:22:29.512508 kubelet[2368]: I1101 00:22:29.512458 2368 server.go:1287] "Started kubelet"
Nov 1 00:22:29.514162 kubelet[2368]: I1101 00:22:29.513429 2368 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 00:22:29.514411 kubelet[2368]: I1101 00:22:29.514387 2368 server.go:479] "Adding debug handlers to kubelet server"
Nov 1 00:22:29.517991 kubelet[2368]: I1101 00:22:29.517964 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 00:22:29.518812 kubelet[2368]: I1101 00:22:29.518744 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 00:22:29.519171 kubelet[2368]: I1101 00:22:29.519067 2368 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 00:22:29.523880 kubelet[2368]: E1101 00:22:29.520640 2368 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.149.99:6443/api/v1/namespaces/default/events\": dial tcp 46.62.149.99:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-b21903d23a.1873ba25835c9484 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-b21903d23a,UID:ci-4081-3-6-n-b21903d23a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-b21903d23a,},FirstTimestamp:2025-11-01 00:22:29.51243482 +0000 UTC m=+0.522441373,LastTimestamp:2025-11-01 00:22:29.51243482 +0000 UTC m=+0.522441373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-b21903d23a,}"
Nov 1 00:22:29.527761 kubelet[2368]: I1101 00:22:29.527744 2368 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 00:22:29.529333 kubelet[2368]: I1101 00:22:29.529313 2368 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 1 00:22:29.529514 kubelet[2368]: E1101 00:22:29.529484 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found"
Nov 1 00:22:29.530422 kubelet[2368]: E1101 00:22:29.530312 2368 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 1 00:22:29.530422 kubelet[2368]: E1101 00:22:29.530391 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.149.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-b21903d23a?timeout=10s\": dial tcp 46.62.149.99:6443: connect: connection refused" interval="200ms"
Nov 1 00:22:29.531793 kubelet[2368]: I1101 00:22:29.531161 2368 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 1 00:22:29.531793 kubelet[2368]: I1101 00:22:29.531221 2368 reconciler.go:26] "Reconciler: start to sync state"
Nov 1 00:22:29.531793 kubelet[2368]: W1101 00:22:29.531618 2368 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.62.149.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.62.149.99:6443: connect: connection refused
Nov 1 00:22:29.531793 kubelet[2368]: E1101 00:22:29.531661 2368 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.62.149.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.149.99:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:22:29.534969 kubelet[2368]: I1101 00:22:29.534947 2368 factory.go:221] Registration of the containerd container factory successfully
Nov 1 00:22:29.535042 kubelet[2368]: I1101 00:22:29.535034 2368 factory.go:221] Registration of the systemd container factory successfully
Nov 1 00:22:29.535566 kubelet[2368]: I1101 00:22:29.535543 2368 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 00:22:29.554309 kubelet[2368]: I1101 00:22:29.554254 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 1 00:22:29.557555 kubelet[2368]: I1101 00:22:29.557525 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 1 00:22:29.557555 kubelet[2368]: I1101 00:22:29.557552 2368 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 1 00:22:29.557746 kubelet[2368]: I1101 00:22:29.557575 2368 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 00:22:29.557746 kubelet[2368]: I1101 00:22:29.557583 2368 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 1 00:22:29.557746 kubelet[2368]: E1101 00:22:29.557630 2368 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 1 00:22:29.561423 kubelet[2368]: W1101 00:22:29.561386 2368 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.62.149.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.62.149.99:6443: connect: connection refused
Nov 1 00:22:29.561535 kubelet[2368]: E1101 00:22:29.561516 2368 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.62.149.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.149.99:6443: connect: connection refused" logger="UnhandledError"
Nov 1 00:22:29.573832 kubelet[2368]: I1101 00:22:29.573810 2368 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 00:22:29.573832 kubelet[2368]: I1101 00:22:29.573823 2368 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 00:22:29.573832 kubelet[2368]: I1101 00:22:29.573840 2368 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 00:22:29.575539 kubelet[2368]: I1101 00:22:29.575516 2368 policy_none.go:49] "None policy: Start"
Nov 1 00:22:29.575539 kubelet[2368]: I1101 00:22:29.575536 2368 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 1 00:22:29.575621 kubelet[2368]: I1101 00:22:29.575548 2368 state_mem.go:35] "Initializing new in-memory state store"
Nov 1 00:22:29.582832 kubelet[2368]: I1101 00:22:29.582780 2368 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 1 00:22:29.583035 kubelet[2368]: I1101 00:22:29.582964 2368 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 1 00:22:29.583035 kubelet[2368]: I1101 00:22:29.582984 2368 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 1 00:22:29.584032 kubelet[2368]: I1101 00:22:29.583991 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 1 00:22:29.584958 kubelet[2368]: E1101 00:22:29.584929 2368 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring."
err="no imagefs label for configured runtime" Nov 1 00:22:29.585047 kubelet[2368]: E1101 00:22:29.584970 2368 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:29.667221 kubelet[2368]: E1101 00:22:29.667065 2368 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.674690 kubelet[2368]: E1101 00:22:29.674642 2368 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.677255 kubelet[2368]: E1101 00:22:29.677217 2368 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.685331 kubelet[2368]: I1101 00:22:29.685288 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.685775 kubelet[2368]: E1101 00:22:29.685744 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.149.99:6443/api/v1/nodes\": dial tcp 46.62.149.99:6443: connect: connection refused" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.731794 kubelet[2368]: E1101 00:22:29.731618 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.149.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-b21903d23a?timeout=10s\": dial tcp 46.62.149.99:6443: connect: connection refused" interval="400ms" Nov 1 00:22:29.732000 kubelet[2368]: I1101 00:22:29.731816 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3197ae34194f735040a200942411215a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-b21903d23a\" (UID: \"3197ae34194f735040a200942411215a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.732000 kubelet[2368]: I1101 00:22:29.731852 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.732000 kubelet[2368]: I1101 00:22:29.731891 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa766a77a2aad672ef8fc2509e0d3450-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-b21903d23a\" (UID: \"aa766a77a2aad672ef8fc2509e0d3450\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.732000 kubelet[2368]: I1101 00:22:29.731926 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.732000 kubelet[2368]: I1101 00:22:29.731947 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.732117 kubelet[2368]: I1101 00:22:29.731965 2368 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.732117 kubelet[2368]: I1101 00:22:29.731988 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3197ae34194f735040a200942411215a-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-b21903d23a\" (UID: \"3197ae34194f735040a200942411215a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.732117 kubelet[2368]: I1101 00:22:29.732021 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3197ae34194f735040a200942411215a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-b21903d23a\" (UID: \"3197ae34194f735040a200942411215a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.732117 kubelet[2368]: I1101 00:22:29.732040 2368 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.888519 kubelet[2368]: I1101 00:22:29.888475 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.888866 kubelet[2368]: E1101 00:22:29.888832 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://46.62.149.99:6443/api/v1/nodes\": dial tcp 46.62.149.99:6443: connect: connection refused" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:29.969737 containerd[1624]: time="2025-11-01T00:22:29.969679523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-b21903d23a,Uid:3197ae34194f735040a200942411215a,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:29.976198 containerd[1624]: time="2025-11-01T00:22:29.976157370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-b21903d23a,Uid:7d33d855a6ea40d5810c5fe15382f787,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:29.978616 containerd[1624]: time="2025-11-01T00:22:29.978593325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-b21903d23a,Uid:aa766a77a2aad672ef8fc2509e0d3450,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:30.132848 kubelet[2368]: E1101 00:22:30.132697 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.149.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-b21903d23a?timeout=10s\": dial tcp 46.62.149.99:6443: connect: connection refused" interval="800ms" Nov 1 00:22:30.292083 kubelet[2368]: I1101 00:22:30.292027 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:30.292561 kubelet[2368]: E1101 00:22:30.292507 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.149.99:6443/api/v1/nodes\": dial tcp 46.62.149.99:6443: connect: connection refused" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:30.404755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4132349577.mount: Deactivated successfully. 
Nov 1 00:22:30.416754 containerd[1624]: time="2025-11-01T00:22:30.416696069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:30.417683 containerd[1624]: time="2025-11-01T00:22:30.417617974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312078" Nov 1 00:22:30.418267 containerd[1624]: time="2025-11-01T00:22:30.418221588Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:30.419165 containerd[1624]: time="2025-11-01T00:22:30.419036776Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:30.420730 containerd[1624]: time="2025-11-01T00:22:30.420650024Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:30.421167 containerd[1624]: time="2025-11-01T00:22:30.421109052Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:30.421827 containerd[1624]: time="2025-11-01T00:22:30.421710432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:30.422682 containerd[1624]: time="2025-11-01T00:22:30.422624702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:30.425028 
containerd[1624]: time="2025-11-01T00:22:30.424878631Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 455.10759ms" Nov 1 00:22:30.427050 containerd[1624]: time="2025-11-01T00:22:30.426997764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 448.262448ms" Nov 1 00:22:30.427567 containerd[1624]: time="2025-11-01T00:22:30.427522668Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 451.309943ms" Nov 1 00:22:30.549493 kubelet[2368]: W1101 00:22:30.549379 2368 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.62.149.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.62.149.99:6443: connect: connection refused Nov 1 00:22:30.549493 kubelet[2368]: E1101 00:22:30.549452 2368 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.62.149.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.149.99:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:30.556854 containerd[1624]: time="2025-11-01T00:22:30.556569848Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:30.556854 containerd[1624]: time="2025-11-01T00:22:30.556677048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:30.556854 containerd[1624]: time="2025-11-01T00:22:30.556691898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.556854 containerd[1624]: time="2025-11-01T00:22:30.556782774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.565616 containerd[1624]: time="2025-11-01T00:22:30.565365684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:30.565616 containerd[1624]: time="2025-11-01T00:22:30.565476371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:30.565616 containerd[1624]: time="2025-11-01T00:22:30.565491202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.566049 containerd[1624]: time="2025-11-01T00:22:30.565931894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.583508 containerd[1624]: time="2025-11-01T00:22:30.582345716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:30.583508 containerd[1624]: time="2025-11-01T00:22:30.582439488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:30.583508 containerd[1624]: time="2025-11-01T00:22:30.582547017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.583508 containerd[1624]: time="2025-11-01T00:22:30.582662154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.662432 containerd[1624]: time="2025-11-01T00:22:30.661494905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-b21903d23a,Uid:aa766a77a2aad672ef8fc2509e0d3450,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec69da99126e063038292f461f9fa6e46d152408ca5543d97d36ea935fb9e715\"" Nov 1 00:22:30.666593 containerd[1624]: time="2025-11-01T00:22:30.666566264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-b21903d23a,Uid:7d33d855a6ea40d5810c5fe15382f787,Namespace:kube-system,Attempt:0,} returns sandbox id \"86cd091752e2c3b8e4fb313dde1cea9709057643fabd7f5fe0ebe7ed10d836c8\"" Nov 1 00:22:30.671576 containerd[1624]: time="2025-11-01T00:22:30.671354385Z" level=info msg="CreateContainer within sandbox \"ec69da99126e063038292f461f9fa6e46d152408ca5543d97d36ea935fb9e715\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:22:30.671576 containerd[1624]: time="2025-11-01T00:22:30.671367973Z" level=info msg="CreateContainer within sandbox \"86cd091752e2c3b8e4fb313dde1cea9709057643fabd7f5fe0ebe7ed10d836c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:22:30.675252 containerd[1624]: time="2025-11-01T00:22:30.675222074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-b21903d23a,Uid:3197ae34194f735040a200942411215a,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"16396bb839c27f1abdb1e74b215f1f858d0e2d53ac7d299c49198ab1e3569b17\"" Nov 1 00:22:30.682314 containerd[1624]: time="2025-11-01T00:22:30.682207949Z" level=info msg="CreateContainer within sandbox \"16396bb839c27f1abdb1e74b215f1f858d0e2d53ac7d299c49198ab1e3569b17\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:22:30.695402 containerd[1624]: time="2025-11-01T00:22:30.695211930Z" level=info msg="CreateContainer within sandbox \"86cd091752e2c3b8e4fb313dde1cea9709057643fabd7f5fe0ebe7ed10d836c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a8aaef7ecd9d5fcb7040be82fd33f4a9a4cec3ffadd850d6bd785130c7b2359\"" Nov 1 00:22:30.696117 containerd[1624]: time="2025-11-01T00:22:30.696087280Z" level=info msg="StartContainer for \"3a8aaef7ecd9d5fcb7040be82fd33f4a9a4cec3ffadd850d6bd785130c7b2359\"" Nov 1 00:22:30.703308 containerd[1624]: time="2025-11-01T00:22:30.703271640Z" level=info msg="CreateContainer within sandbox \"ec69da99126e063038292f461f9fa6e46d152408ca5543d97d36ea935fb9e715\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"188d27287d28bd27fa28d62cf515f7266863b04237a54d49f59dac908766e230\"" Nov 1 00:22:30.704209 containerd[1624]: time="2025-11-01T00:22:30.703913764Z" level=info msg="StartContainer for \"188d27287d28bd27fa28d62cf515f7266863b04237a54d49f59dac908766e230\"" Nov 1 00:22:30.705701 containerd[1624]: time="2025-11-01T00:22:30.705672650Z" level=info msg="CreateContainer within sandbox \"16396bb839c27f1abdb1e74b215f1f858d0e2d53ac7d299c49198ab1e3569b17\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3747e810df7983c1ee26651936b39da65617de5012ceb2a96c74b9e660127bd7\"" Nov 1 00:22:30.706095 containerd[1624]: time="2025-11-01T00:22:30.706070123Z" level=info msg="StartContainer for \"3747e810df7983c1ee26651936b39da65617de5012ceb2a96c74b9e660127bd7\"" Nov 1 00:22:30.790460 containerd[1624]: time="2025-11-01T00:22:30.790412140Z" level=info 
msg="StartContainer for \"3747e810df7983c1ee26651936b39da65617de5012ceb2a96c74b9e660127bd7\" returns successfully" Nov 1 00:22:30.811575 containerd[1624]: time="2025-11-01T00:22:30.810810190Z" level=info msg="StartContainer for \"3a8aaef7ecd9d5fcb7040be82fd33f4a9a4cec3ffadd850d6bd785130c7b2359\" returns successfully" Nov 1 00:22:30.824782 containerd[1624]: time="2025-11-01T00:22:30.824736878Z" level=info msg="StartContainer for \"188d27287d28bd27fa28d62cf515f7266863b04237a54d49f59dac908766e230\" returns successfully" Nov 1 00:22:30.872077 kubelet[2368]: W1101 00:22:30.872016 2368 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.62.149.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.62.149.99:6443: connect: connection refused Nov 1 00:22:30.872289 kubelet[2368]: E1101 00:22:30.872086 2368 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.62.149.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.149.99:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:30.908824 kubelet[2368]: W1101 00:22:30.908636 2368 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.62.149.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.62.149.99:6443: connect: connection refused Nov 1 00:22:30.908824 kubelet[2368]: E1101 00:22:30.908698 2368 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.62.149.99:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.149.99:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:30.933503 kubelet[2368]: E1101 
00:22:30.933445 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.149.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-b21903d23a?timeout=10s\": dial tcp 46.62.149.99:6443: connect: connection refused" interval="1.6s" Nov 1 00:22:31.042835 kubelet[2368]: W1101 00:22:31.042755 2368 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.62.149.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-b21903d23a&limit=500&resourceVersion=0": dial tcp 46.62.149.99:6443: connect: connection refused Nov 1 00:22:31.043013 kubelet[2368]: E1101 00:22:31.042858 2368 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.62.149.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-b21903d23a&limit=500&resourceVersion=0\": dial tcp 46.62.149.99:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:31.096824 kubelet[2368]: I1101 00:22:31.096785 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:31.097108 kubelet[2368]: E1101 00:22:31.097080 2368 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.149.99:6443/api/v1/nodes\": dial tcp 46.62.149.99:6443: connect: connection refused" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:31.577544 kubelet[2368]: E1101 00:22:31.577508 2368 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:31.585488 kubelet[2368]: E1101 00:22:31.585459 2368 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:31.588309 kubelet[2368]: 
E1101 00:22:31.588269 2368 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:32.542164 kubelet[2368]: E1101 00:22:32.542104 2368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:32.590705 kubelet[2368]: E1101 00:22:32.590466 2368 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:32.590705 kubelet[2368]: E1101 00:22:32.590595 2368 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:32.591620 kubelet[2368]: E1101 00:22:32.590944 2368 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-b21903d23a\" not found" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:32.682813 kubelet[2368]: E1101 00:22:32.682707 2368 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081-3-6-n-b21903d23a" not found Nov 1 00:22:32.700831 kubelet[2368]: I1101 00:22:32.700575 2368 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:32.715668 kubelet[2368]: I1101 00:22:32.715622 2368 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:32.715668 kubelet[2368]: E1101 00:22:32.715669 2368 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-b21903d23a\": node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:32.732031 kubelet[2368]: E1101 
00:22:32.731919 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:32.832789 kubelet[2368]: E1101 00:22:32.832601 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:32.933445 kubelet[2368]: E1101 00:22:32.933393 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:33.033965 kubelet[2368]: E1101 00:22:33.033892 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:33.135205 kubelet[2368]: E1101 00:22:33.134999 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:33.235874 kubelet[2368]: E1101 00:22:33.235786 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:33.337222 kubelet[2368]: E1101 00:22:33.336906 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:33.437526 kubelet[2368]: E1101 00:22:33.437487 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:33.538141 kubelet[2368]: E1101 00:22:33.538089 2368 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-b21903d23a\" not found" Nov 1 00:22:33.589676 kubelet[2368]: I1101 00:22:33.589633 2368 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:33.590304 kubelet[2368]: I1101 00:22:33.589985 2368 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 
1 00:22:33.630665 kubelet[2368]: I1101 00:22:33.630608 2368 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:33.636514 kubelet[2368]: E1101 00:22:33.636453 2368 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-b21903d23a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:33.636514 kubelet[2368]: I1101 00:22:33.636500 2368 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:33.642606 kubelet[2368]: I1101 00:22:33.641235 2368 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:33.645945 kubelet[2368]: E1101 00:22:33.645909 2368 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-b21903d23a\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:34.438749 systemd[1]: Reloading requested from client PID 2644 ('systemctl') (unit session-7.scope)... Nov 1 00:22:34.438772 systemd[1]: Reloading... Nov 1 00:22:34.513550 kubelet[2368]: I1101 00:22:34.513509 2368 apiserver.go:52] "Watching apiserver" Nov 1 00:22:34.526164 zram_generator::config[2685]: No configuration found. Nov 1 00:22:34.531538 kubelet[2368]: I1101 00:22:34.531493 2368 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:34.630419 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:34.697995 systemd[1]: Reloading finished in 258 ms. Nov 1 00:22:34.729474 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:34.736478 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 1 00:22:34.736732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:34.749647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:34.848411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:34.855790 (kubelet)[2745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:22:34.916208 kubelet[2745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:34.918035 kubelet[2745]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:34.918035 kubelet[2745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:34.918035 kubelet[2745]: I1101 00:22:34.916670 2745 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:34.923013 kubelet[2745]: I1101 00:22:34.922983 2745 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:22:34.923013 kubelet[2745]: I1101 00:22:34.923009 2745 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:34.923642 kubelet[2745]: I1101 00:22:34.923626 2745 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:22:34.927018 kubelet[2745]: I1101 00:22:34.927000 2745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 1 00:22:34.931854 kubelet[2745]: I1101 00:22:34.931717 2745 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:34.934943 kubelet[2745]: E1101 00:22:34.934902 2745 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:34.935117 kubelet[2745]: I1101 00:22:34.935106 2745 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:34.938097 kubelet[2745]: I1101 00:22:34.938083 2745 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:22:34.939368 kubelet[2745]: I1101 00:22:34.939336 2745 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:34.939612 kubelet[2745]: I1101 00:22:34.939435 2745 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-3-6-n-b21903d23a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:22:34.939725 kubelet[2745]: I1101 00:22:34.939716 2745 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:22:34.939781 kubelet[2745]: I1101 00:22:34.939774 2745 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:22:34.939867 kubelet[2745]: I1101 00:22:34.939856 2745 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:34.940072 kubelet[2745]: I1101 00:22:34.940046 2745 kubelet.go:446] 
"Attempting to sync node with API server" Nov 1 00:22:34.940857 kubelet[2745]: I1101 00:22:34.940843 2745 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:34.940941 kubelet[2745]: I1101 00:22:34.940933 2745 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:22:34.943197 kubelet[2745]: I1101 00:22:34.943178 2745 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:34.946697 kubelet[2745]: I1101 00:22:34.946680 2745 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:22:34.949171 kubelet[2745]: I1101 00:22:34.949051 2745 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:22:34.959156 kubelet[2745]: I1101 00:22:34.959077 2745 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:22:34.959378 kubelet[2745]: I1101 00:22:34.959285 2745 server.go:1287] "Started kubelet" Nov 1 00:22:34.961943 kubelet[2745]: I1101 00:22:34.961269 2745 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:34.962425 kubelet[2745]: I1101 00:22:34.962355 2745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:34.962890 kubelet[2745]: I1101 00:22:34.962875 2745 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:34.965026 kubelet[2745]: I1101 00:22:34.964687 2745 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:22:34.967450 kubelet[2745]: I1101 00:22:34.967432 2745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:34.970031 kubelet[2745]: I1101 00:22:34.969738 2745 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:34.978822 kubelet[2745]: I1101 00:22:34.978787 2745 
desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:22:34.979767 kubelet[2745]: E1101 00:22:34.979625 2745 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:34.980708 kubelet[2745]: I1101 00:22:34.980663 2745 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:22:34.981060 kubelet[2745]: I1101 00:22:34.980814 2745 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:34.982251 kubelet[2745]: I1101 00:22:34.982177 2745 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:22:34.983621 kubelet[2745]: I1101 00:22:34.983428 2745 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:22:34.988406 kubelet[2745]: I1101 00:22:34.987219 2745 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:22:34.988656 kubelet[2745]: I1101 00:22:34.988634 2745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:22:34.989840 kubelet[2745]: I1101 00:22:34.989822 2745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:34.989952 kubelet[2745]: I1101 00:22:34.989940 2745 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:22:34.991960 kubelet[2745]: I1101 00:22:34.990681 2745 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:22:34.991960 kubelet[2745]: I1101 00:22:34.990691 2745 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:22:34.991960 kubelet[2745]: E1101 00:22:34.990744 2745 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:35.037532 kubelet[2745]: I1101 00:22:35.037499 2745 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:35.037532 kubelet[2745]: I1101 00:22:35.037518 2745 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:35.037532 kubelet[2745]: I1101 00:22:35.037535 2745 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:35.037711 kubelet[2745]: I1101 00:22:35.037687 2745 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:22:35.037764 kubelet[2745]: I1101 00:22:35.037708 2745 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:22:35.037764 kubelet[2745]: I1101 00:22:35.037760 2745 policy_none.go:49] "None policy: Start" Nov 1 00:22:35.037808 kubelet[2745]: I1101 00:22:35.037769 2745 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:22:35.037808 kubelet[2745]: I1101 00:22:35.037780 2745 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:22:35.037890 kubelet[2745]: I1101 00:22:35.037872 2745 state_mem.go:75] "Updated machine memory state" Nov 1 00:22:35.039189 kubelet[2745]: I1101 00:22:35.038760 2745 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:22:35.039189 kubelet[2745]: I1101 00:22:35.038887 2745 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:35.039189 kubelet[2745]: I1101 00:22:35.038896 2745 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:35.040987 kubelet[2745]: I1101 00:22:35.040178 2745 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Nov 1 00:22:35.041637 kubelet[2745]: E1101 00:22:35.041592 2745 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:35.092000 kubelet[2745]: I1101 00:22:35.091946 2745 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.093360 kubelet[2745]: I1101 00:22:35.093174 2745 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.093360 kubelet[2745]: I1101 00:22:35.093224 2745 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.100897 kubelet[2745]: E1101 00:22:35.100597 2745 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-b21903d23a\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.101760 kubelet[2745]: E1101 00:22:35.101576 2745 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.101760 kubelet[2745]: E1101 00:22:35.101649 2745 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-b21903d23a\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.145648 kubelet[2745]: I1101 00:22:35.145615 2745 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.155464 kubelet[2745]: I1101 00:22:35.155314 2745 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.155464 kubelet[2745]: I1101 00:22:35.155410 2745 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-b21903d23a" Nov 1 
00:22:35.185272 kubelet[2745]: I1101 00:22:35.185224 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.185429 kubelet[2745]: I1101 00:22:35.185295 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.185429 kubelet[2745]: I1101 00:22:35.185319 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.185429 kubelet[2745]: I1101 00:22:35.185338 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3197ae34194f735040a200942411215a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-b21903d23a\" (UID: \"3197ae34194f735040a200942411215a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.185429 kubelet[2745]: I1101 00:22:35.185357 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.185429 kubelet[2745]: I1101 00:22:35.185373 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7d33d855a6ea40d5810c5fe15382f787-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-b21903d23a\" (UID: \"7d33d855a6ea40d5810c5fe15382f787\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.185560 kubelet[2745]: I1101 00:22:35.185389 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa766a77a2aad672ef8fc2509e0d3450-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-b21903d23a\" (UID: \"aa766a77a2aad672ef8fc2509e0d3450\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.185560 kubelet[2745]: I1101 00:22:35.185408 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3197ae34194f735040a200942411215a-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-b21903d23a\" (UID: \"3197ae34194f735040a200942411215a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.185560 kubelet[2745]: I1101 00:22:35.185427 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3197ae34194f735040a200942411215a-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-b21903d23a\" (UID: \"3197ae34194f735040a200942411215a\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:35.945589 kubelet[2745]: I1101 00:22:35.943886 2745 apiserver.go:52] "Watching 
apiserver" Nov 1 00:22:35.979572 kubelet[2745]: I1101 00:22:35.979491 2745 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:36.016576 kubelet[2745]: I1101 00:22:36.016537 2745 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:36.027717 kubelet[2745]: E1101 00:22:36.027691 2745 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-b21903d23a\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" Nov 1 00:22:36.070970 kubelet[2745]: I1101 00:22:36.070591 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-b21903d23a" podStartSLOduration=3.070568573 podStartE2EDuration="3.070568573s" podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:36.058497454 +0000 UTC m=+1.197189939" watchObservedRunningTime="2025-11-01 00:22:36.070568573 +0000 UTC m=+1.209261079" Nov 1 00:22:36.083437 kubelet[2745]: I1101 00:22:36.083373 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-b21903d23a" podStartSLOduration=3.083188429 podStartE2EDuration="3.083188429s" podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:36.070917932 +0000 UTC m=+1.209610437" watchObservedRunningTime="2025-11-01 00:22:36.083188429 +0000 UTC m=+1.221880934" Nov 1 00:22:36.083593 kubelet[2745]: I1101 00:22:36.083512 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-b21903d23a" podStartSLOduration=3.083501005 podStartE2EDuration="3.083501005s" 
podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:36.081760656 +0000 UTC m=+1.220453161" watchObservedRunningTime="2025-11-01 00:22:36.083501005 +0000 UTC m=+1.222193509" Nov 1 00:22:36.387340 update_engine[1611]: I20251101 00:22:36.387163 1611 update_attempter.cc:509] Updating boot flags... Nov 1 00:22:36.475698 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2795) Nov 1 00:22:36.541237 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2798) Nov 1 00:22:40.046786 kubelet[2745]: I1101 00:22:40.046723 2745 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:22:40.047183 containerd[1624]: time="2025-11-01T00:22:40.047077741Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 00:22:40.047406 kubelet[2745]: I1101 00:22:40.047391 2745 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:22:40.729642 kubelet[2745]: I1101 00:22:40.729601 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4978cc95-f985-4d31-b93a-348d19c25cac-kube-proxy\") pod \"kube-proxy-sk97n\" (UID: \"4978cc95-f985-4d31-b93a-348d19c25cac\") " pod="kube-system/kube-proxy-sk97n" Nov 1 00:22:40.729642 kubelet[2745]: I1101 00:22:40.729644 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4978cc95-f985-4d31-b93a-348d19c25cac-xtables-lock\") pod \"kube-proxy-sk97n\" (UID: \"4978cc95-f985-4d31-b93a-348d19c25cac\") " pod="kube-system/kube-proxy-sk97n" Nov 1 00:22:40.729810 kubelet[2745]: I1101 00:22:40.729668 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98rt4\" (UniqueName: \"kubernetes.io/projected/4978cc95-f985-4d31-b93a-348d19c25cac-kube-api-access-98rt4\") pod \"kube-proxy-sk97n\" (UID: \"4978cc95-f985-4d31-b93a-348d19c25cac\") " pod="kube-system/kube-proxy-sk97n" Nov 1 00:22:40.729810 kubelet[2745]: I1101 00:22:40.729706 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4978cc95-f985-4d31-b93a-348d19c25cac-lib-modules\") pod \"kube-proxy-sk97n\" (UID: \"4978cc95-f985-4d31-b93a-348d19c25cac\") " pod="kube-system/kube-proxy-sk97n" Nov 1 00:22:40.837039 kubelet[2745]: E1101 00:22:40.836981 2745 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:22:40.837039 kubelet[2745]: E1101 00:22:40.837050 2745 projected.go:194] Error preparing data for projected volume kube-api-access-98rt4 
for pod kube-system/kube-proxy-sk97n: configmap "kube-root-ca.crt" not found Nov 1 00:22:40.837209 kubelet[2745]: E1101 00:22:40.837115 2745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4978cc95-f985-4d31-b93a-348d19c25cac-kube-api-access-98rt4 podName:4978cc95-f985-4d31-b93a-348d19c25cac nodeName:}" failed. No retries permitted until 2025-11-01 00:22:41.337091395 +0000 UTC m=+6.475783880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-98rt4" (UniqueName: "kubernetes.io/projected/4978cc95-f985-4d31-b93a-348d19c25cac-kube-api-access-98rt4") pod "kube-proxy-sk97n" (UID: "4978cc95-f985-4d31-b93a-348d19c25cac") : configmap "kube-root-ca.crt" not found Nov 1 00:22:41.233349 kubelet[2745]: I1101 00:22:41.233309 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9srgl\" (UniqueName: \"kubernetes.io/projected/9038c03e-9260-4f63-8a7f-2a9d5e557e3f-kube-api-access-9srgl\") pod \"tigera-operator-7dcd859c48-xz864\" (UID: \"9038c03e-9260-4f63-8a7f-2a9d5e557e3f\") " pod="tigera-operator/tigera-operator-7dcd859c48-xz864" Nov 1 00:22:41.233349 kubelet[2745]: I1101 00:22:41.233357 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9038c03e-9260-4f63-8a7f-2a9d5e557e3f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xz864\" (UID: \"9038c03e-9260-4f63-8a7f-2a9d5e557e3f\") " pod="tigera-operator/tigera-operator-7dcd859c48-xz864" Nov 1 00:22:41.518991 containerd[1624]: time="2025-11-01T00:22:41.518863997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xz864,Uid:9038c03e-9260-4f63-8a7f-2a9d5e557e3f,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:22:41.547881 containerd[1624]: time="2025-11-01T00:22:41.547481035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:41.547881 containerd[1624]: time="2025-11-01T00:22:41.547547525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:41.547881 containerd[1624]: time="2025-11-01T00:22:41.547569399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:41.547881 containerd[1624]: time="2025-11-01T00:22:41.547679796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:41.583181 containerd[1624]: time="2025-11-01T00:22:41.583017358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sk97n,Uid:4978cc95-f985-4d31-b93a-348d19c25cac,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:41.613754 containerd[1624]: time="2025-11-01T00:22:41.613699418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xz864,Uid:9038c03e-9260-4f63-8a7f-2a9d5e557e3f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a3d9f9611f29da0db158a72170134c00992d7f345d7a9d8d4063a7fec3d149e1\"" Nov 1 00:22:41.617475 containerd[1624]: time="2025-11-01T00:22:41.617325669Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:22:41.622214 containerd[1624]: time="2025-11-01T00:22:41.622026660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:41.622534 containerd[1624]: time="2025-11-01T00:22:41.622307935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:41.622534 containerd[1624]: time="2025-11-01T00:22:41.622339988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:41.623301 containerd[1624]: time="2025-11-01T00:22:41.623248149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:41.661594 containerd[1624]: time="2025-11-01T00:22:41.661507843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sk97n,Uid:4978cc95-f985-4d31-b93a-348d19c25cac,Namespace:kube-system,Attempt:0,} returns sandbox id \"30fd77238d69fab7cf4540bfbac035f75ea6d6e03a8871338696a3170d4b79ab\"" Nov 1 00:22:41.664915 containerd[1624]: time="2025-11-01T00:22:41.664847680Z" level=info msg="CreateContainer within sandbox \"30fd77238d69fab7cf4540bfbac035f75ea6d6e03a8871338696a3170d4b79ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:22:41.677420 containerd[1624]: time="2025-11-01T00:22:41.677359666Z" level=info msg="CreateContainer within sandbox \"30fd77238d69fab7cf4540bfbac035f75ea6d6e03a8871338696a3170d4b79ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ec19cfa8efe091c3576b491f53822fa8f1caacadac25de84d64d8fc8ced8a17\"" Nov 1 00:22:41.679353 containerd[1624]: time="2025-11-01T00:22:41.678155515Z" level=info msg="StartContainer for \"2ec19cfa8efe091c3576b491f53822fa8f1caacadac25de84d64d8fc8ced8a17\"" Nov 1 00:22:41.733321 containerd[1624]: time="2025-11-01T00:22:41.733214169Z" level=info msg="StartContainer for \"2ec19cfa8efe091c3576b491f53822fa8f1caacadac25de84d64d8fc8ced8a17\" returns successfully" Nov 1 00:22:42.062927 kubelet[2745]: I1101 00:22:42.062838 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sk97n" podStartSLOduration=2.062814773 podStartE2EDuration="2.062814773s" podCreationTimestamp="2025-11-01 00:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 
00:22:42.050374344 +0000 UTC m=+7.189066828" watchObservedRunningTime="2025-11-01 00:22:42.062814773 +0000 UTC m=+7.201507258" Nov 1 00:22:43.922608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066013325.mount: Deactivated successfully. Nov 1 00:22:45.476829 containerd[1624]: time="2025-11-01T00:22:45.476764048Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:45.478079 containerd[1624]: time="2025-11-01T00:22:45.477918947Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:22:45.479966 containerd[1624]: time="2025-11-01T00:22:45.478907752Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:45.480761 containerd[1624]: time="2025-11-01T00:22:45.480736228Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:45.481778 containerd[1624]: time="2025-11-01T00:22:45.481430567Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.864079236s" Nov 1 00:22:45.481778 containerd[1624]: time="2025-11-01T00:22:45.481456166Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:22:45.521177 containerd[1624]: time="2025-11-01T00:22:45.521118092Z" level=info msg="CreateContainer within sandbox 
\"a3d9f9611f29da0db158a72170134c00992d7f345d7a9d8d4063a7fec3d149e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:22:45.534528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407860562.mount: Deactivated successfully. Nov 1 00:22:45.535478 containerd[1624]: time="2025-11-01T00:22:45.535442352Z" level=info msg="CreateContainer within sandbox \"a3d9f9611f29da0db158a72170134c00992d7f345d7a9d8d4063a7fec3d149e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ef743b3c56b6f12bc4d772fc2fc0bf74a123d022aba9777cf5de0cbb698c9108\"" Nov 1 00:22:45.536053 containerd[1624]: time="2025-11-01T00:22:45.536036003Z" level=info msg="StartContainer for \"ef743b3c56b6f12bc4d772fc2fc0bf74a123d022aba9777cf5de0cbb698c9108\"" Nov 1 00:22:45.591264 containerd[1624]: time="2025-11-01T00:22:45.591222056Z" level=info msg="StartContainer for \"ef743b3c56b6f12bc4d772fc2fc0bf74a123d022aba9777cf5de0cbb698c9108\" returns successfully" Nov 1 00:22:46.060103 kubelet[2745]: I1101 00:22:46.059593 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xz864" podStartSLOduration=1.180526666 podStartE2EDuration="5.059564729s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" firstStartedPulling="2025-11-01 00:22:41.615212581 +0000 UTC m=+6.753905067" lastFinishedPulling="2025-11-01 00:22:45.494250645 +0000 UTC m=+10.632943130" observedRunningTime="2025-11-01 00:22:46.059422972 +0000 UTC m=+11.198115508" watchObservedRunningTime="2025-11-01 00:22:46.059564729 +0000 UTC m=+11.198257224" Nov 1 00:22:51.820244 sudo[1871]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:52.006675 sshd[1847]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:52.012384 systemd[1]: sshd@6-46.62.149.99:22-147.75.109.163:38856.service: Deactivated successfully. Nov 1 00:22:52.020105 systemd[1]: session-7.scope: Deactivated successfully. 
Nov 1 00:22:52.023637 systemd-logind[1610]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:22:52.025568 systemd-logind[1610]: Removed session 7. Nov 1 00:22:56.445610 kubelet[2745]: I1101 00:22:56.445521 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd576493-1896-4fbd-9758-f089098903ae-tigera-ca-bundle\") pod \"calico-typha-6756f9876d-tccbv\" (UID: \"dd576493-1896-4fbd-9758-f089098903ae\") " pod="calico-system/calico-typha-6756f9876d-tccbv" Nov 1 00:22:56.445610 kubelet[2745]: I1101 00:22:56.445589 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd576493-1896-4fbd-9758-f089098903ae-typha-certs\") pod \"calico-typha-6756f9876d-tccbv\" (UID: \"dd576493-1896-4fbd-9758-f089098903ae\") " pod="calico-system/calico-typha-6756f9876d-tccbv" Nov 1 00:22:56.445610 kubelet[2745]: I1101 00:22:56.445622 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5q4\" (UniqueName: \"kubernetes.io/projected/dd576493-1896-4fbd-9758-f089098903ae-kube-api-access-vr5q4\") pod \"calico-typha-6756f9876d-tccbv\" (UID: \"dd576493-1896-4fbd-9758-f089098903ae\") " pod="calico-system/calico-typha-6756f9876d-tccbv" Nov 1 00:22:56.649724 kubelet[2745]: I1101 00:22:56.649468 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-flexvol-driver-host\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.649724 kubelet[2745]: I1101 00:22:56.649628 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8jbg\" (UniqueName: 
\"kubernetes.io/projected/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-kube-api-access-z8jbg\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.649724 kubelet[2745]: I1101 00:22:56.649654 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-xtables-lock\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.649963 kubelet[2745]: I1101 00:22:56.649744 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-lib-modules\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.649963 kubelet[2745]: I1101 00:22:56.649773 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-var-lib-calico\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.649963 kubelet[2745]: I1101 00:22:56.649827 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-node-certs\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.649963 kubelet[2745]: I1101 00:22:56.649852 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-policysync\") pod 
\"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.650043 kubelet[2745]: I1101 00:22:56.649977 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-cni-net-dir\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.650043 kubelet[2745]: I1101 00:22:56.649999 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-cni-bin-dir\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.650080 kubelet[2745]: I1101 00:22:56.650016 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-cni-log-dir\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.650100 kubelet[2745]: I1101 00:22:56.650084 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-tigera-ca-bundle\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.650851 kubelet[2745]: I1101 00:22:56.650180 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7276ca3b-6e2e-437c-a988-ca17eba1b2cd-var-run-calico\") pod \"calico-node-m2jz8\" (UID: \"7276ca3b-6e2e-437c-a988-ca17eba1b2cd\") " 
pod="calico-system/calico-node-m2jz8" Nov 1 00:22:56.678219 containerd[1624]: time="2025-11-01T00:22:56.677780337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6756f9876d-tccbv,Uid:dd576493-1896-4fbd-9758-f089098903ae,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:56.746785 containerd[1624]: time="2025-11-01T00:22:56.745097901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:56.747881 containerd[1624]: time="2025-11-01T00:22:56.747663750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:56.747881 containerd[1624]: time="2025-11-01T00:22:56.747726982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.748064 containerd[1624]: time="2025-11-01T00:22:56.747860239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.797844 kubelet[2745]: E1101 00:22:56.796829 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:22:56.816951 kubelet[2745]: E1101 00:22:56.813750 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.816951 kubelet[2745]: W1101 00:22:56.813775 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.816951 kubelet[2745]: E1101 00:22:56.813805 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.819961 kubelet[2745]: E1101 00:22:56.819722 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.820723 kubelet[2745]: W1101 00:22:56.820676 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.821481 kubelet[2745]: E1101 00:22:56.821188 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.837073 kubelet[2745]: E1101 00:22:56.837025 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.837073 kubelet[2745]: W1101 00:22:56.837065 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.837073 kubelet[2745]: E1101 00:22:56.837085 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.837582 kubelet[2745]: E1101 00:22:56.837318 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.837582 kubelet[2745]: W1101 00:22:56.837326 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.837582 kubelet[2745]: E1101 00:22:56.837335 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.837582 kubelet[2745]: E1101 00:22:56.837462 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.837582 kubelet[2745]: W1101 00:22:56.837471 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.837582 kubelet[2745]: E1101 00:22:56.837479 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.839256 kubelet[2745]: E1101 00:22:56.837615 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.839256 kubelet[2745]: W1101 00:22:56.837621 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.839256 kubelet[2745]: E1101 00:22:56.837628 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.839256 kubelet[2745]: E1101 00:22:56.837872 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.839256 kubelet[2745]: W1101 00:22:56.837880 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.839256 kubelet[2745]: E1101 00:22:56.837888 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.839256 kubelet[2745]: E1101 00:22:56.838159 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.839256 kubelet[2745]: W1101 00:22:56.838167 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.839256 kubelet[2745]: E1101 00:22:56.838175 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.839256 kubelet[2745]: E1101 00:22:56.838474 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.839462 kubelet[2745]: W1101 00:22:56.838482 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.839462 kubelet[2745]: E1101 00:22:56.838489 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.840973 kubelet[2745]: E1101 00:22:56.840862 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.840973 kubelet[2745]: W1101 00:22:56.840871 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.840973 kubelet[2745]: E1101 00:22:56.840880 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.841775 kubelet[2745]: E1101 00:22:56.841360 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.841775 kubelet[2745]: W1101 00:22:56.841368 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.841775 kubelet[2745]: E1101 00:22:56.841377 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.842182 kubelet[2745]: E1101 00:22:56.841913 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.842182 kubelet[2745]: W1101 00:22:56.841927 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.842182 kubelet[2745]: E1101 00:22:56.841935 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.843426 kubelet[2745]: E1101 00:22:56.843402 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.843426 kubelet[2745]: W1101 00:22:56.843418 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.843426 kubelet[2745]: E1101 00:22:56.843428 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.843785 kubelet[2745]: E1101 00:22:56.843765 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.843785 kubelet[2745]: W1101 00:22:56.843780 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.843840 kubelet[2745]: E1101 00:22:56.843788 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.844155 kubelet[2745]: E1101 00:22:56.843957 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.844155 kubelet[2745]: W1101 00:22:56.843967 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.844155 kubelet[2745]: E1101 00:22:56.843975 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.844155 kubelet[2745]: E1101 00:22:56.844155 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.844294 kubelet[2745]: W1101 00:22:56.844163 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.844294 kubelet[2745]: E1101 00:22:56.844171 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.844356 kubelet[2745]: E1101 00:22:56.844303 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.844356 kubelet[2745]: W1101 00:22:56.844329 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.844356 kubelet[2745]: E1101 00:22:56.844336 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.844497 kubelet[2745]: E1101 00:22:56.844466 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.844497 kubelet[2745]: W1101 00:22:56.844495 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.844546 kubelet[2745]: E1101 00:22:56.844503 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.845165 kubelet[2745]: E1101 00:22:56.844628 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.845165 kubelet[2745]: W1101 00:22:56.844636 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.845165 kubelet[2745]: E1101 00:22:56.844643 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.845165 kubelet[2745]: E1101 00:22:56.844740 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.845165 kubelet[2745]: W1101 00:22:56.844746 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.845165 kubelet[2745]: E1101 00:22:56.844752 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.845165 kubelet[2745]: E1101 00:22:56.844859 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.845165 kubelet[2745]: W1101 00:22:56.844865 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.845165 kubelet[2745]: E1101 00:22:56.844872 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.845165 kubelet[2745]: E1101 00:22:56.845001 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.845466 kubelet[2745]: W1101 00:22:56.845007 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.845466 kubelet[2745]: E1101 00:22:56.845014 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.851332 containerd[1624]: time="2025-11-01T00:22:56.851293905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6756f9876d-tccbv,Uid:dd576493-1896-4fbd-9758-f089098903ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"92b8f1df1261b22066271913bb3aa7dc912856b88932581674ffb2ed0fd4660a\"" Nov 1 00:22:56.854646 kubelet[2745]: E1101 00:22:56.854601 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.854646 kubelet[2745]: W1101 00:22:56.854614 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.855005 kubelet[2745]: E1101 00:22:56.854740 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.855005 kubelet[2745]: I1101 00:22:56.854770 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2fb3e683-810b-4091-a4c8-6fa869de6607-socket-dir\") pod \"csi-node-driver-jnx62\" (UID: \"2fb3e683-810b-4091-a4c8-6fa869de6607\") " pod="calico-system/csi-node-driver-jnx62" Nov 1 00:22:56.855264 containerd[1624]: time="2025-11-01T00:22:56.855226339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:22:56.855537 kubelet[2745]: E1101 00:22:56.855448 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.855537 kubelet[2745]: W1101 00:22:56.855461 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.855537 kubelet[2745]: E1101 00:22:56.855476 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.856238 kubelet[2745]: E1101 00:22:56.856078 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.856238 kubelet[2745]: W1101 00:22:56.856090 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.856238 kubelet[2745]: E1101 00:22:56.856099 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.856238 kubelet[2745]: I1101 00:22:56.856200 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2fb3e683-810b-4091-a4c8-6fa869de6607-registration-dir\") pod \"csi-node-driver-jnx62\" (UID: \"2fb3e683-810b-4091-a4c8-6fa869de6607\") " pod="calico-system/csi-node-driver-jnx62" Nov 1 00:22:56.856784 kubelet[2745]: E1101 00:22:56.856704 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.856784 kubelet[2745]: W1101 00:22:56.856723 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.856930 kubelet[2745]: E1101 00:22:56.856731 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.857405 kubelet[2745]: E1101 00:22:56.857353 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.857405 kubelet[2745]: W1101 00:22:56.857364 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.857405 kubelet[2745]: E1101 00:22:56.857373 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.858232 kubelet[2745]: E1101 00:22:56.858159 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.858232 kubelet[2745]: W1101 00:22:56.858192 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.858511 kubelet[2745]: E1101 00:22:56.858402 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.858681 kubelet[2745]: E1101 00:22:56.858671 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.858816 kubelet[2745]: W1101 00:22:56.858753 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.858816 kubelet[2745]: E1101 00:22:56.858766 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.858816 kubelet[2745]: I1101 00:22:56.858781 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2fb3e683-810b-4091-a4c8-6fa869de6607-varrun\") pod \"csi-node-driver-jnx62\" (UID: \"2fb3e683-810b-4091-a4c8-6fa869de6607\") " pod="calico-system/csi-node-driver-jnx62" Nov 1 00:22:56.859360 kubelet[2745]: E1101 00:22:56.859291 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.859360 kubelet[2745]: W1101 00:22:56.859302 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.859360 kubelet[2745]: E1101 00:22:56.859325 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.859660 kubelet[2745]: I1101 00:22:56.859581 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hblht\" (UniqueName: \"kubernetes.io/projected/2fb3e683-810b-4091-a4c8-6fa869de6607-kube-api-access-hblht\") pod \"csi-node-driver-jnx62\" (UID: \"2fb3e683-810b-4091-a4c8-6fa869de6607\") " pod="calico-system/csi-node-driver-jnx62" Nov 1 00:22:56.860057 kubelet[2745]: E1101 00:22:56.859952 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.860057 kubelet[2745]: W1101 00:22:56.859964 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.860057 kubelet[2745]: E1101 00:22:56.859979 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.860284 kubelet[2745]: E1101 00:22:56.860246 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.860284 kubelet[2745]: W1101 00:22:56.860257 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.860444 kubelet[2745]: E1101 00:22:56.860340 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.860733 kubelet[2745]: E1101 00:22:56.860663 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.860733 kubelet[2745]: W1101 00:22:56.860673 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.860733 kubelet[2745]: E1101 00:22:56.860687 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.860733 kubelet[2745]: I1101 00:22:56.860701 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2fb3e683-810b-4091-a4c8-6fa869de6607-kubelet-dir\") pod \"csi-node-driver-jnx62\" (UID: \"2fb3e683-810b-4091-a4c8-6fa869de6607\") " pod="calico-system/csi-node-driver-jnx62" Nov 1 00:22:56.860962 kubelet[2745]: E1101 00:22:56.860917 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.860962 kubelet[2745]: W1101 00:22:56.860956 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.861104 kubelet[2745]: E1101 00:22:56.860982 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.861293 kubelet[2745]: E1101 00:22:56.861119 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.861293 kubelet[2745]: W1101 00:22:56.861147 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.861293 kubelet[2745]: E1101 00:22:56.861166 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.861678 kubelet[2745]: E1101 00:22:56.861510 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.861678 kubelet[2745]: W1101 00:22:56.861522 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.861678 kubelet[2745]: E1101 00:22:56.861530 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.862762 kubelet[2745]: E1101 00:22:56.862752 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.862837 kubelet[2745]: W1101 00:22:56.862828 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.862907 kubelet[2745]: E1101 00:22:56.862874 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.871511 containerd[1624]: time="2025-11-01T00:22:56.871469214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2jz8,Uid:7276ca3b-6e2e-437c-a988-ca17eba1b2cd,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:56.928661 containerd[1624]: time="2025-11-01T00:22:56.928317750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:56.928661 containerd[1624]: time="2025-11-01T00:22:56.928405720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:56.928661 containerd[1624]: time="2025-11-01T00:22:56.928420028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.928661 containerd[1624]: time="2025-11-01T00:22:56.928540690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:56.961939 kubelet[2745]: E1101 00:22:56.961599 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.961939 kubelet[2745]: W1101 00:22:56.961622 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.961939 kubelet[2745]: E1101 00:22:56.961840 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.962954 kubelet[2745]: E1101 00:22:56.962724 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.963136 kubelet[2745]: W1101 00:22:56.962760 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.963136 kubelet[2745]: E1101 00:22:56.963029 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.963593 kubelet[2745]: E1101 00:22:56.963512 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.963593 kubelet[2745]: W1101 00:22:56.963521 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.963788 kubelet[2745]: E1101 00:22:56.963683 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.964347 kubelet[2745]: E1101 00:22:56.964253 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.964347 kubelet[2745]: W1101 00:22:56.964263 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.964487 kubelet[2745]: E1101 00:22:56.964441 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.965556 kubelet[2745]: E1101 00:22:56.965314 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.965556 kubelet[2745]: W1101 00:22:56.965324 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.965556 kubelet[2745]: E1101 00:22:56.965345 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.965945 kubelet[2745]: E1101 00:22:56.965934 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.966099 kubelet[2745]: W1101 00:22:56.966023 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.966682 kubelet[2745]: E1101 00:22:56.966410 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.966682 kubelet[2745]: E1101 00:22:56.966572 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.966682 kubelet[2745]: W1101 00:22:56.966579 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.967238 kubelet[2745]: E1101 00:22:56.966883 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.967902 kubelet[2745]: E1101 00:22:56.967891 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.968009 kubelet[2745]: W1101 00:22:56.967999 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.968331 kubelet[2745]: E1101 00:22:56.968250 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.968970 kubelet[2745]: E1101 00:22:56.968834 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.968970 kubelet[2745]: W1101 00:22:56.968845 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.968970 kubelet[2745]: E1101 00:22:56.968853 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.971364 kubelet[2745]: E1101 00:22:56.971223 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.971364 kubelet[2745]: W1101 00:22:56.971254 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.971547 kubelet[2745]: E1101 00:22:56.971424 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.971547 kubelet[2745]: W1101 00:22:56.971432 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.971815 kubelet[2745]: E1101 00:22:56.971719 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.971815 kubelet[2745]: W1101 00:22:56.971729 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Nov 1 00:22:56.971815 kubelet[2745]: E1101 00:22:56.971737 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.972251 kubelet[2745]: E1101 00:22:56.972159 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.972251 kubelet[2745]: W1101 00:22:56.972176 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.972251 kubelet[2745]: E1101 00:22:56.972190 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.972863 kubelet[2745]: E1101 00:22:56.972846 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.973216 kubelet[2745]: W1101 00:22:56.973180 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.973388 kubelet[2745]: E1101 00:22:56.973307 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.973624 kubelet[2745]: E1101 00:22:56.973165 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.973775 kubelet[2745]: E1101 00:22:56.973155 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.975326 kubelet[2745]: E1101 00:22:56.975149 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.975326 kubelet[2745]: W1101 00:22:56.975171 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.975326 kubelet[2745]: E1101 00:22:56.975182 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.975654 kubelet[2745]: E1101 00:22:56.975505 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.975654 kubelet[2745]: W1101 00:22:56.975519 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.975654 kubelet[2745]: E1101 00:22:56.975532 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.976380 kubelet[2745]: E1101 00:22:56.976346 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.976380 kubelet[2745]: W1101 00:22:56.976362 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.976619 kubelet[2745]: E1101 00:22:56.976605 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.976991 kubelet[2745]: E1101 00:22:56.976719 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.977180 kubelet[2745]: W1101 00:22:56.977077 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.977425 kubelet[2745]: E1101 00:22:56.977347 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.977995 kubelet[2745]: E1101 00:22:56.977913 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.977995 kubelet[2745]: W1101 00:22:56.977927 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.979381 kubelet[2745]: E1101 00:22:56.979203 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.979795 kubelet[2745]: E1101 00:22:56.979543 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.979795 kubelet[2745]: W1101 00:22:56.979574 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.979795 kubelet[2745]: E1101 00:22:56.979697 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.979911 kubelet[2745]: E1101 00:22:56.979831 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.979911 kubelet[2745]: W1101 00:22:56.979848 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.980248 kubelet[2745]: E1101 00:22:56.979992 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.980248 kubelet[2745]: W1101 00:22:56.980000 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.980248 kubelet[2745]: E1101 00:22:56.980008 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.980248 kubelet[2745]: E1101 00:22:56.979993 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.980248 kubelet[2745]: E1101 00:22:56.980203 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.980248 kubelet[2745]: W1101 00:22:56.980212 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.980248 kubelet[2745]: E1101 00:22:56.980237 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.981340 kubelet[2745]: E1101 00:22:56.980388 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.981340 kubelet[2745]: W1101 00:22:56.980395 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.981340 kubelet[2745]: E1101 00:22:56.980401 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:56.981340 kubelet[2745]: E1101 00:22:56.980533 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.981340 kubelet[2745]: W1101 00:22:56.980540 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.981340 kubelet[2745]: E1101 00:22:56.980546 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.984667 kubelet[2745]: E1101 00:22:56.984643 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:56.984667 kubelet[2745]: W1101 00:22:56.984657 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:56.984667 kubelet[2745]: E1101 00:22:56.984668 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:56.985222 containerd[1624]: time="2025-11-01T00:22:56.984870545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2jz8,Uid:7276ca3b-6e2e-437c-a988-ca17eba1b2cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"045773a9c9dc0f26eccef8d8c11019a10c26a138124fccb56fe2d78a4788d70f\"" Nov 1 00:22:58.685978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount586896257.mount: Deactivated successfully. 
Nov 1 00:22:58.993820 kubelet[2745]: E1101 00:22:58.992474 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:22:59.119392 containerd[1624]: time="2025-11-01T00:22:59.119317603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:59.120573 containerd[1624]: time="2025-11-01T00:22:59.120378905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:22:59.122563 containerd[1624]: time="2025-11-01T00:22:59.121429947Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:59.123588 containerd[1624]: time="2025-11-01T00:22:59.123496543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:59.124041 containerd[1624]: time="2025-11-01T00:22:59.124020420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.268761488s" Nov 1 00:22:59.124248 containerd[1624]: time="2025-11-01T00:22:59.124155760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:22:59.125587 containerd[1624]: time="2025-11-01T00:22:59.125470530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:22:59.144091 containerd[1624]: time="2025-11-01T00:22:59.144029745Z" level=info msg="CreateContainer within sandbox \"92b8f1df1261b22066271913bb3aa7dc912856b88932581674ffb2ed0fd4660a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:22:59.180427 containerd[1624]: time="2025-11-01T00:22:59.180357064Z" level=info msg="CreateContainer within sandbox \"92b8f1df1261b22066271913bb3aa7dc912856b88932581674ffb2ed0fd4660a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"50c7dc180d0af60efa843a5623b62fc6a1acb773624ddd5a2dca12aabe6d078b\"" Nov 1 00:22:59.181607 containerd[1624]: time="2025-11-01T00:22:59.181061238Z" level=info msg="StartContainer for \"50c7dc180d0af60efa843a5623b62fc6a1acb773624ddd5a2dca12aabe6d078b\"" Nov 1 00:22:59.280488 containerd[1624]: time="2025-11-01T00:22:59.280080346Z" level=info msg="StartContainer for \"50c7dc180d0af60efa843a5623b62fc6a1acb773624ddd5a2dca12aabe6d078b\" returns successfully" Nov 1 00:23:00.168493 kubelet[2745]: E1101 00:23:00.168427 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.169458 kubelet[2745]: W1101 00:23:00.169300 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.169458 kubelet[2745]: E1101 00:23:00.169340 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.170676 kubelet[2745]: E1101 00:23:00.169582 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.170676 kubelet[2745]: W1101 00:23:00.169594 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.170676 kubelet[2745]: E1101 00:23:00.169608 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.170676 kubelet[2745]: E1101 00:23:00.169880 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.170676 kubelet[2745]: W1101 00:23:00.169893 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.170676 kubelet[2745]: E1101 00:23:00.169908 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.170676 kubelet[2745]: E1101 00:23:00.170314 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.170676 kubelet[2745]: W1101 00:23:00.170327 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.170676 kubelet[2745]: E1101 00:23:00.170374 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.170887 kubelet[2745]: E1101 00:23:00.170686 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.170887 kubelet[2745]: W1101 00:23:00.170699 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.170887 kubelet[2745]: E1101 00:23:00.170711 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.171092 kubelet[2745]: E1101 00:23:00.171069 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.171092 kubelet[2745]: W1101 00:23:00.171090 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.171196 kubelet[2745]: E1101 00:23:00.171104 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.171551 kubelet[2745]: E1101 00:23:00.171528 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.171551 kubelet[2745]: W1101 00:23:00.171547 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.171632 kubelet[2745]: E1101 00:23:00.171559 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.171898 kubelet[2745]: E1101 00:23:00.171789 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.171898 kubelet[2745]: W1101 00:23:00.171801 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.171898 kubelet[2745]: E1101 00:23:00.171813 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.172284 kubelet[2745]: E1101 00:23:00.172261 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.172284 kubelet[2745]: W1101 00:23:00.172280 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.172382 kubelet[2745]: E1101 00:23:00.172296 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.172476 kubelet[2745]: E1101 00:23:00.172457 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.172476 kubelet[2745]: W1101 00:23:00.172469 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.172476 kubelet[2745]: E1101 00:23:00.172478 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.172673 kubelet[2745]: E1101 00:23:00.172648 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.172673 kubelet[2745]: W1101 00:23:00.172664 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.172673 kubelet[2745]: E1101 00:23:00.172672 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.172983 kubelet[2745]: E1101 00:23:00.172877 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.172983 kubelet[2745]: W1101 00:23:00.172890 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.172983 kubelet[2745]: E1101 00:23:00.172900 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.173252 kubelet[2745]: E1101 00:23:00.173226 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.173252 kubelet[2745]: W1101 00:23:00.173240 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.173252 kubelet[2745]: E1101 00:23:00.173252 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.173547 kubelet[2745]: E1101 00:23:00.173395 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.173547 kubelet[2745]: W1101 00:23:00.173404 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.173547 kubelet[2745]: E1101 00:23:00.173412 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.173707 kubelet[2745]: E1101 00:23:00.173569 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.173707 kubelet[2745]: W1101 00:23:00.173578 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.173707 kubelet[2745]: E1101 00:23:00.173588 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.200943 kubelet[2745]: E1101 00:23:00.200908 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.200943 kubelet[2745]: W1101 00:23:00.200938 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.202314 kubelet[2745]: E1101 00:23:00.200966 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.202314 kubelet[2745]: E1101 00:23:00.201478 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.202314 kubelet[2745]: W1101 00:23:00.201490 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.202314 kubelet[2745]: E1101 00:23:00.201535 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.202314 kubelet[2745]: E1101 00:23:00.201959 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.202314 kubelet[2745]: W1101 00:23:00.201986 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.202836 kubelet[2745]: E1101 00:23:00.202010 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.203429 kubelet[2745]: E1101 00:23:00.203382 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.203429 kubelet[2745]: W1101 00:23:00.203397 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.203429 kubelet[2745]: E1101 00:23:00.203414 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.204060 kubelet[2745]: E1101 00:23:00.204030 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.204060 kubelet[2745]: W1101 00:23:00.204050 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.204258 kubelet[2745]: E1101 00:23:00.204200 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.204598 kubelet[2745]: E1101 00:23:00.204481 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.204598 kubelet[2745]: W1101 00:23:00.204495 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.204922 kubelet[2745]: E1101 00:23:00.204856 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.205439 kubelet[2745]: E1101 00:23:00.205394 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.205439 kubelet[2745]: W1101 00:23:00.205407 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.205590 kubelet[2745]: E1101 00:23:00.205502 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.205678 kubelet[2745]: E1101 00:23:00.205630 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.205678 kubelet[2745]: W1101 00:23:00.205648 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.205758 kubelet[2745]: E1101 00:23:00.205744 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.206026 kubelet[2745]: E1101 00:23:00.206002 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.206026 kubelet[2745]: W1101 00:23:00.206022 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.206098 kubelet[2745]: E1101 00:23:00.206040 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.206382 kubelet[2745]: E1101 00:23:00.206352 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.206382 kubelet[2745]: W1101 00:23:00.206384 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.206479 kubelet[2745]: E1101 00:23:00.206399 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.206665 kubelet[2745]: E1101 00:23:00.206650 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.206665 kubelet[2745]: W1101 00:23:00.206663 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.206752 kubelet[2745]: E1101 00:23:00.206689 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.207033 kubelet[2745]: E1101 00:23:00.207019 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.207033 kubelet[2745]: W1101 00:23:00.207032 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.207175 kubelet[2745]: E1101 00:23:00.207045 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.207552 kubelet[2745]: E1101 00:23:00.207532 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.207552 kubelet[2745]: W1101 00:23:00.207546 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.207661 kubelet[2745]: E1101 00:23:00.207560 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.207773 kubelet[2745]: E1101 00:23:00.207751 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.207773 kubelet[2745]: W1101 00:23:00.207765 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.207848 kubelet[2745]: E1101 00:23:00.207778 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.208077 kubelet[2745]: E1101 00:23:00.208056 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.208077 kubelet[2745]: W1101 00:23:00.208071 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.208208 kubelet[2745]: E1101 00:23:00.208092 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.208396 kubelet[2745]: E1101 00:23:00.208374 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.208396 kubelet[2745]: W1101 00:23:00.208389 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.208511 kubelet[2745]: E1101 00:23:00.208417 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.208798 kubelet[2745]: E1101 00:23:00.208775 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.208798 kubelet[2745]: W1101 00:23:00.208791 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.208877 kubelet[2745]: E1101 00:23:00.208811 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.209056 kubelet[2745]: E1101 00:23:00.209029 2745 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.209056 kubelet[2745]: W1101 00:23:00.209047 2745 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.209056 kubelet[2745]: E1101 00:23:00.209057 2745 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.729954 containerd[1624]: time="2025-11-01T00:23:00.729888064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:00.731293 containerd[1624]: time="2025-11-01T00:23:00.731232819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:23:00.732308 containerd[1624]: time="2025-11-01T00:23:00.732222862Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:00.735265 containerd[1624]: time="2025-11-01T00:23:00.734354731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:00.735265 containerd[1624]: time="2025-11-01T00:23:00.734944634Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.609448185s" Nov 1 00:23:00.735265 containerd[1624]: time="2025-11-01T00:23:00.734969012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:23:00.737195 containerd[1624]: time="2025-11-01T00:23:00.737163250Z" level=info msg="CreateContainer within sandbox \"045773a9c9dc0f26eccef8d8c11019a10c26a138124fccb56fe2d78a4788d70f\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:23:00.757354 containerd[1624]: time="2025-11-01T00:23:00.757306054Z" level=info msg="CreateContainer within sandbox \"045773a9c9dc0f26eccef8d8c11019a10c26a138124fccb56fe2d78a4788d70f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"271b167acc6d134aa7bb97c97fc8b6bc9f2e893c6d9fd3cc1e8096313773b9ab\"" Nov 1 00:23:00.759301 containerd[1624]: time="2025-11-01T00:23:00.759095494Z" level=info msg="StartContainer for \"271b167acc6d134aa7bb97c97fc8b6bc9f2e893c6d9fd3cc1e8096313773b9ab\"" Nov 1 00:23:00.784732 systemd[1]: run-containerd-runc-k8s.io-271b167acc6d134aa7bb97c97fc8b6bc9f2e893c6d9fd3cc1e8096313773b9ab-runc.7XWFUi.mount: Deactivated successfully. Nov 1 00:23:00.831762 containerd[1624]: time="2025-11-01T00:23:00.831602707Z" level=info msg="StartContainer for \"271b167acc6d134aa7bb97c97fc8b6bc9f2e893c6d9fd3cc1e8096313773b9ab\" returns successfully" Nov 1 00:23:00.944906 containerd[1624]: time="2025-11-01T00:23:00.918119393Z" level=info msg="shim disconnected" id=271b167acc6d134aa7bb97c97fc8b6bc9f2e893c6d9fd3cc1e8096313773b9ab namespace=k8s.io Nov 1 00:23:00.944906 containerd[1624]: time="2025-11-01T00:23:00.944885671Z" level=warning msg="cleaning up after shim disconnected" id=271b167acc6d134aa7bb97c97fc8b6bc9f2e893c6d9fd3cc1e8096313773b9ab namespace=k8s.io Nov 1 00:23:00.944906 containerd[1624]: time="2025-11-01T00:23:00.944909327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:00.992522 kubelet[2745]: E1101 00:23:00.991752 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:23:01.118139 kubelet[2745]: I1101 00:23:01.118091 2745 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Nov 1 00:23:01.122205 containerd[1624]: time="2025-11-01T00:23:01.120803763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:23:01.140616 kubelet[2745]: I1101 00:23:01.140540 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6756f9876d-tccbv" podStartSLOduration=2.868630758 podStartE2EDuration="5.140523408s" podCreationTimestamp="2025-11-01 00:22:56 +0000 UTC" firstStartedPulling="2025-11-01 00:22:56.853400818 +0000 UTC m=+21.992093304" lastFinishedPulling="2025-11-01 00:22:59.12529344 +0000 UTC m=+24.263985954" observedRunningTime="2025-11-01 00:23:00.129417063 +0000 UTC m=+25.268109568" watchObservedRunningTime="2025-11-01 00:23:01.140523408 +0000 UTC m=+26.279215893" Nov 1 00:23:01.747150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-271b167acc6d134aa7bb97c97fc8b6bc9f2e893c6d9fd3cc1e8096313773b9ab-rootfs.mount: Deactivated successfully. Nov 1 00:23:02.992928 kubelet[2745]: E1101 00:23:02.992417 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:23:03.754265 containerd[1624]: time="2025-11-01T00:23:03.754202013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:03.756981 containerd[1624]: time="2025-11-01T00:23:03.756913225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:23:03.759910 containerd[1624]: time="2025-11-01T00:23:03.759717607Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 1 00:23:03.798217 containerd[1624]: time="2025-11-01T00:23:03.798071832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:03.799221 containerd[1624]: time="2025-11-01T00:23:03.798489884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.677651545s" Nov 1 00:23:03.799221 containerd[1624]: time="2025-11-01T00:23:03.798532646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:23:03.803337 containerd[1624]: time="2025-11-01T00:23:03.803212536Z" level=info msg="CreateContainer within sandbox \"045773a9c9dc0f26eccef8d8c11019a10c26a138124fccb56fe2d78a4788d70f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:23:03.839840 containerd[1624]: time="2025-11-01T00:23:03.839764544Z" level=info msg="CreateContainer within sandbox \"045773a9c9dc0f26eccef8d8c11019a10c26a138124fccb56fe2d78a4788d70f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"843a34ba936af8a8db332db2bad46ffa6a39b0b512a4c3b9cf3137d57f7a495e\"" Nov 1 00:23:03.840645 containerd[1624]: time="2025-11-01T00:23:03.840544090Z" level=info msg="StartContainer for \"843a34ba936af8a8db332db2bad46ffa6a39b0b512a4c3b9cf3137d57f7a495e\"" Nov 1 00:23:03.920397 containerd[1624]: time="2025-11-01T00:23:03.920229914Z" level=info msg="StartContainer for \"843a34ba936af8a8db332db2bad46ffa6a39b0b512a4c3b9cf3137d57f7a495e\" returns successfully" Nov 1 00:23:04.452638 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-843a34ba936af8a8db332db2bad46ffa6a39b0b512a4c3b9cf3137d57f7a495e-rootfs.mount: Deactivated successfully. Nov 1 00:23:04.459101 containerd[1624]: time="2025-11-01T00:23:04.458892567Z" level=info msg="shim disconnected" id=843a34ba936af8a8db332db2bad46ffa6a39b0b512a4c3b9cf3137d57f7a495e namespace=k8s.io Nov 1 00:23:04.459975 containerd[1624]: time="2025-11-01T00:23:04.459791111Z" level=warning msg="cleaning up after shim disconnected" id=843a34ba936af8a8db332db2bad46ffa6a39b0b512a4c3b9cf3137d57f7a495e namespace=k8s.io Nov 1 00:23:04.459975 containerd[1624]: time="2025-11-01T00:23:04.459809165Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:04.463210 kubelet[2745]: I1101 00:23:04.462521 2745 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:23:04.529301 kubelet[2745]: I1101 00:23:04.528225 2745 status_manager.go:890] "Failed to get status for pod" podUID="16c60fed-179e-4b9b-b5f3-3af5fa94c7e7" pod="kube-system/coredns-668d6bf9bc-hbtgd" err="pods \"coredns-668d6bf9bc-hbtgd\" is forbidden: User \"system:node:ci-4081-3-6-n-b21903d23a\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-6-n-b21903d23a' and this object" Nov 1 00:23:04.646675 kubelet[2745]: I1101 00:23:04.646578 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/16c60fed-179e-4b9b-b5f3-3af5fa94c7e7-config-volume\") pod \"coredns-668d6bf9bc-hbtgd\" (UID: \"16c60fed-179e-4b9b-b5f3-3af5fa94c7e7\") " pod="kube-system/coredns-668d6bf9bc-hbtgd" Nov 1 00:23:04.646675 kubelet[2745]: I1101 00:23:04.646644 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aad50b7-9c5b-4c75-b82d-9cd68d392290-goldmane-ca-bundle\") pod 
\"goldmane-666569f655-lrfg9\" (UID: \"5aad50b7-9c5b-4c75-b82d-9cd68d392290\") " pod="calico-system/goldmane-666569f655-lrfg9" Nov 1 00:23:04.646675 kubelet[2745]: I1101 00:23:04.646679 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfdz2\" (UniqueName: \"kubernetes.io/projected/7a5e4241-2b02-4d05-aee8-621954146083-kube-api-access-rfdz2\") pod \"calico-apiserver-5cd88c66c7-t86s4\" (UID: \"7a5e4241-2b02-4d05-aee8-621954146083\") " pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" Nov 1 00:23:04.647652 kubelet[2745]: I1101 00:23:04.646704 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3349d8c7-91f7-48f7-a15a-d52d578f2952-config-volume\") pod \"coredns-668d6bf9bc-n2bnk\" (UID: \"3349d8c7-91f7-48f7-a15a-d52d578f2952\") " pod="kube-system/coredns-668d6bf9bc-n2bnk" Nov 1 00:23:04.647652 kubelet[2745]: I1101 00:23:04.646725 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-whisker-ca-bundle\") pod \"whisker-dd6f966-7pmfv\" (UID: \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\") " pod="calico-system/whisker-dd6f966-7pmfv" Nov 1 00:23:04.647652 kubelet[2745]: I1101 00:23:04.646745 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bh9z\" (UniqueName: \"kubernetes.io/projected/86457ed6-a969-4f17-a69a-681dcab352cc-kube-api-access-2bh9z\") pod \"calico-apiserver-5cd88c66c7-sqhhf\" (UID: \"86457ed6-a969-4f17-a69a-681dcab352cc\") " pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" Nov 1 00:23:04.647652 kubelet[2745]: I1101 00:23:04.646767 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5aad50b7-9c5b-4c75-b82d-9cd68d392290-config\") pod \"goldmane-666569f655-lrfg9\" (UID: \"5aad50b7-9c5b-4c75-b82d-9cd68d392290\") " pod="calico-system/goldmane-666569f655-lrfg9" Nov 1 00:23:04.647652 kubelet[2745]: I1101 00:23:04.646788 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7a5e4241-2b02-4d05-aee8-621954146083-calico-apiserver-certs\") pod \"calico-apiserver-5cd88c66c7-t86s4\" (UID: \"7a5e4241-2b02-4d05-aee8-621954146083\") " pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" Nov 1 00:23:04.648994 kubelet[2745]: I1101 00:23:04.646813 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e33febfb-cf29-450e-a371-4a2c6d265345-tigera-ca-bundle\") pod \"calico-kube-controllers-85dfcd4bbd-qbgm9\" (UID: \"e33febfb-cf29-450e-a371-4a2c6d265345\") " pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" Nov 1 00:23:04.648994 kubelet[2745]: I1101 00:23:04.646833 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5aad50b7-9c5b-4c75-b82d-9cd68d392290-goldmane-key-pair\") pod \"goldmane-666569f655-lrfg9\" (UID: \"5aad50b7-9c5b-4c75-b82d-9cd68d392290\") " pod="calico-system/goldmane-666569f655-lrfg9" Nov 1 00:23:04.648994 kubelet[2745]: I1101 00:23:04.646855 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-whisker-backend-key-pair\") pod \"whisker-dd6f966-7pmfv\" (UID: \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\") " pod="calico-system/whisker-dd6f966-7pmfv" Nov 1 00:23:04.648994 kubelet[2745]: I1101 00:23:04.646891 2745 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62b7c\" (UniqueName: \"kubernetes.io/projected/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-kube-api-access-62b7c\") pod \"whisker-dd6f966-7pmfv\" (UID: \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\") " pod="calico-system/whisker-dd6f966-7pmfv" Nov 1 00:23:04.648994 kubelet[2745]: I1101 00:23:04.646914 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2tj8\" (UniqueName: \"kubernetes.io/projected/3349d8c7-91f7-48f7-a15a-d52d578f2952-kube-api-access-w2tj8\") pod \"coredns-668d6bf9bc-n2bnk\" (UID: \"3349d8c7-91f7-48f7-a15a-d52d578f2952\") " pod="kube-system/coredns-668d6bf9bc-n2bnk" Nov 1 00:23:04.649642 kubelet[2745]: I1101 00:23:04.646936 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwr9l\" (UniqueName: \"kubernetes.io/projected/e33febfb-cf29-450e-a371-4a2c6d265345-kube-api-access-jwr9l\") pod \"calico-kube-controllers-85dfcd4bbd-qbgm9\" (UID: \"e33febfb-cf29-450e-a371-4a2c6d265345\") " pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" Nov 1 00:23:04.649642 kubelet[2745]: I1101 00:23:04.646961 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp8xv\" (UniqueName: \"kubernetes.io/projected/16c60fed-179e-4b9b-b5f3-3af5fa94c7e7-kube-api-access-xp8xv\") pod \"coredns-668d6bf9bc-hbtgd\" (UID: \"16c60fed-179e-4b9b-b5f3-3af5fa94c7e7\") " pod="kube-system/coredns-668d6bf9bc-hbtgd" Nov 1 00:23:04.649642 kubelet[2745]: I1101 00:23:04.646988 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/86457ed6-a969-4f17-a69a-681dcab352cc-calico-apiserver-certs\") pod \"calico-apiserver-5cd88c66c7-sqhhf\" (UID: \"86457ed6-a969-4f17-a69a-681dcab352cc\") " 
pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" Nov 1 00:23:04.649642 kubelet[2745]: I1101 00:23:04.647009 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgq4\" (UniqueName: \"kubernetes.io/projected/5aad50b7-9c5b-4c75-b82d-9cd68d392290-kube-api-access-bpgq4\") pod \"goldmane-666569f655-lrfg9\" (UID: \"5aad50b7-9c5b-4c75-b82d-9cd68d392290\") " pod="calico-system/goldmane-666569f655-lrfg9" Nov 1 00:23:04.832872 containerd[1624]: time="2025-11-01T00:23:04.832679648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbtgd,Uid:16c60fed-179e-4b9b-b5f3-3af5fa94c7e7,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:04.834235 containerd[1624]: time="2025-11-01T00:23:04.832683966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n2bnk,Uid:3349d8c7-91f7-48f7-a15a-d52d578f2952,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:04.837646 containerd[1624]: time="2025-11-01T00:23:04.837581890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd88c66c7-t86s4,Uid:7a5e4241-2b02-4d05-aee8-621954146083,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:04.839913 containerd[1624]: time="2025-11-01T00:23:04.839762422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85dfcd4bbd-qbgm9,Uid:e33febfb-cf29-450e-a371-4a2c6d265345,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:04.845726 containerd[1624]: time="2025-11-01T00:23:04.845693877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dd6f966-7pmfv,Uid:b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:04.865954 containerd[1624]: time="2025-11-01T00:23:04.865203021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd88c66c7-sqhhf,Uid:86457ed6-a969-4f17-a69a-681dcab352cc,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:04.869226 containerd[1624]: 
time="2025-11-01T00:23:04.869192613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lrfg9,Uid:5aad50b7-9c5b-4c75-b82d-9cd68d392290,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:05.002586 containerd[1624]: time="2025-11-01T00:23:05.001897962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnx62,Uid:2fb3e683-810b-4091-a4c8-6fa869de6607,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:05.198788 containerd[1624]: time="2025-11-01T00:23:05.198743011Z" level=error msg="Failed to destroy network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.213623 containerd[1624]: time="2025-11-01T00:23:05.213571595Z" level=error msg="Failed to destroy network for sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.214252 containerd[1624]: time="2025-11-01T00:23:05.214228974Z" level=error msg="encountered an error cleaning up failed sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.220063 containerd[1624]: time="2025-11-01T00:23:05.219964177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbtgd,Uid:16c60fed-179e-4b9b-b5f3-3af5fa94c7e7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.227031 containerd[1624]: time="2025-11-01T00:23:05.226957542Z" level=error msg="Failed to destroy network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.227988 containerd[1624]: time="2025-11-01T00:23:05.227251696Z" level=error msg="encountered an error cleaning up failed sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.227988 containerd[1624]: time="2025-11-01T00:23:05.227305007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd88c66c7-t86s4,Uid:7a5e4241-2b02-4d05-aee8-621954146083,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.228628 containerd[1624]: time="2025-11-01T00:23:05.228172410Z" level=error msg="encountered an error cleaning up failed sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.228628 containerd[1624]: time="2025-11-01T00:23:05.228208720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n2bnk,Uid:3349d8c7-91f7-48f7-a15a-d52d578f2952,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.228628 containerd[1624]: time="2025-11-01T00:23:05.228333299Z" level=error msg="Failed to destroy network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.229309 containerd[1624]: time="2025-11-01T00:23:05.229283860Z" level=error msg="encountered an error cleaning up failed sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.229542 containerd[1624]: time="2025-11-01T00:23:05.229406445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lrfg9,Uid:5aad50b7-9c5b-4c75-b82d-9cd68d392290,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:23:05.229542 containerd[1624]: time="2025-11-01T00:23:05.229499033Z" level=error msg="Failed to destroy network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.230753 containerd[1624]: time="2025-11-01T00:23:05.230725955Z" level=error msg="encountered an error cleaning up failed sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.230834 containerd[1624]: time="2025-11-01T00:23:05.230817741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dd6f966-7pmfv,Uid:b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.231936 kubelet[2745]: E1101 00:23:05.231394 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.231936 kubelet[2745]: E1101 00:23:05.231458 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dd6f966-7pmfv" Nov 1 00:23:05.231936 kubelet[2745]: E1101 00:23:05.231478 2745 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dd6f966-7pmfv" Nov 1 00:23:05.232407 containerd[1624]: time="2025-11-01T00:23:05.231809873Z" level=error msg="Failed to destroy network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.232435 kubelet[2745]: E1101 00:23:05.231513 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dd6f966-7pmfv_calico-system(b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dd6f966-7pmfv_calico-system(b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dd6f966-7pmfv" podUID="b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6" 
Nov 1 00:23:05.232435 kubelet[2745]: E1101 00:23:05.232247 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.233546 kubelet[2745]: E1101 00:23:05.232532 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n2bnk" Nov 1 00:23:05.233546 kubelet[2745]: E1101 00:23:05.232561 2745 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n2bnk" Nov 1 00:23:05.233546 kubelet[2745]: E1101 00:23:05.232596 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n2bnk_kube-system(3349d8c7-91f7-48f7-a15a-d52d578f2952)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n2bnk_kube-system(3349d8c7-91f7-48f7-a15a-d52d578f2952)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n2bnk" podUID="3349d8c7-91f7-48f7-a15a-d52d578f2952" Nov 1 00:23:05.233689 kubelet[2745]: E1101 00:23:05.232635 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.233689 kubelet[2745]: E1101 00:23:05.232652 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hbtgd" Nov 1 00:23:05.233689 kubelet[2745]: E1101 00:23:05.232665 2745 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hbtgd" Nov 1 00:23:05.233788 kubelet[2745]: E1101 00:23:05.232685 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hbtgd_kube-system(16c60fed-179e-4b9b-b5f3-3af5fa94c7e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-hbtgd_kube-system(16c60fed-179e-4b9b-b5f3-3af5fa94c7e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hbtgd" podUID="16c60fed-179e-4b9b-b5f3-3af5fa94c7e7" Nov 1 00:23:05.233788 kubelet[2745]: E1101 00:23:05.232716 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.233788 kubelet[2745]: E1101 00:23:05.232730 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" Nov 1 00:23:05.233868 kubelet[2745]: E1101 00:23:05.232740 2745 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" Nov 1 00:23:05.233868 kubelet[2745]: E1101 
00:23:05.232756 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cd88c66c7-t86s4_calico-apiserver(7a5e4241-2b02-4d05-aee8-621954146083)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cd88c66c7-t86s4_calico-apiserver(7a5e4241-2b02-4d05-aee8-621954146083)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:23:05.233868 kubelet[2745]: E1101 00:23:05.232787 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.233969 kubelet[2745]: E1101 00:23:05.232799 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lrfg9" Nov 1 00:23:05.233969 kubelet[2745]: E1101 00:23:05.232809 2745 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lrfg9" Nov 1 00:23:05.233969 kubelet[2745]: E1101 00:23:05.232837 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lrfg9_calico-system(5aad50b7-9c5b-4c75-b82d-9cd68d392290)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lrfg9_calico-system(5aad50b7-9c5b-4c75-b82d-9cd68d392290)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:23:05.235661 containerd[1624]: time="2025-11-01T00:23:05.235635055Z" level=error msg="encountered an error cleaning up failed sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.235794 containerd[1624]: time="2025-11-01T00:23:05.235758671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85dfcd4bbd-qbgm9,Uid:e33febfb-cf29-450e-a371-4a2c6d265345,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Nov 1 00:23:05.236173 kubelet[2745]: E1101 00:23:05.236001 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.236173 kubelet[2745]: E1101 00:23:05.236061 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" Nov 1 00:23:05.236173 kubelet[2745]: E1101 00:23:05.236081 2745 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" Nov 1 00:23:05.236304 kubelet[2745]: E1101 00:23:05.236107 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85dfcd4bbd-qbgm9_calico-system(e33febfb-cf29-450e-a371-4a2c6d265345)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85dfcd4bbd-qbgm9_calico-system(e33febfb-cf29-450e-a371-4a2c6d265345)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:23:05.236843 containerd[1624]: time="2025-11-01T00:23:05.236816680Z" level=error msg="Failed to destroy network for sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.237371 containerd[1624]: time="2025-11-01T00:23:05.237337107Z" level=error msg="encountered an error cleaning up failed sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.238699 containerd[1624]: time="2025-11-01T00:23:05.238553659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnx62,Uid:2fb3e683-810b-4091-a4c8-6fa869de6607,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.240444 kubelet[2745]: E1101 00:23:05.240288 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.240444 kubelet[2745]: E1101 00:23:05.240334 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnx62" Nov 1 00:23:05.240444 kubelet[2745]: E1101 00:23:05.240360 2745 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jnx62" Nov 1 00:23:05.240573 kubelet[2745]: E1101 00:23:05.240413 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jnx62" 
podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:23:05.240872 containerd[1624]: time="2025-11-01T00:23:05.240855000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:23:05.248652 containerd[1624]: time="2025-11-01T00:23:05.248596750Z" level=error msg="Failed to destroy network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.249049 containerd[1624]: time="2025-11-01T00:23:05.249023728Z" level=error msg="encountered an error cleaning up failed sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.249175 containerd[1624]: time="2025-11-01T00:23:05.249120504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd88c66c7-sqhhf,Uid:86457ed6-a969-4f17-a69a-681dcab352cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:05.249459 kubelet[2745]: E1101 00:23:05.249437 2745 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Nov 1 00:23:05.249617 kubelet[2745]: E1101 00:23:05.249532 2745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" Nov 1 00:23:05.249617 kubelet[2745]: E1101 00:23:05.249552 2745 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" Nov 1 00:23:05.249617 kubelet[2745]: E1101 00:23:05.249590 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cd88c66c7-sqhhf_calico-apiserver(86457ed6-a969-4f17-a69a-681dcab352cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cd88c66c7-sqhhf_calico-apiserver(86457ed6-a969-4f17-a69a-681dcab352cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:23:05.815740 systemd[1]: 
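[Annotation, not part of the captured log.] Every containerd and kubelet error above reports the same root cause: the Calico CNI plugin cannot `stat /var/lib/calico/nodename`, a file the calico/node container writes once it is running, so every sandbox add fails identically. A small helper (our own, written for this note; it assumes a POSIX `sed` with basic-regex capture groups) can pull the missing path out of any one of these lines:

```shell
#!/usr/bin/env bash
# Extract the file the CNI plugin failed to stat from a containerd/kubelet
# error line of the shape seen in the log above.
extract_missing_path() {
  # Capture the text between "stat " and ": no such file or directory".
  printf '%s\n' "$1" |
    sed -n 's/.*failed (\(add\|delete\)): stat \([^:]*\): no such file or directory.*/\2/p'
}

# Abridged copy of one error line from the log above.
line='plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory'
extract_missing_path "$line"
# prints /var/lib/calico/nodename
```

On a live node the same check is just `ls /var/lib/calico/nodename`; its absence here points at the calico/node pod (whose image pull is logged below) not yet being up, rather than at the individual workload pods that keep failing.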
run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3-shm.mount: Deactivated successfully. Nov 1 00:23:05.815876 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030-shm.mount: Deactivated successfully. Nov 1 00:23:05.815972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878-shm.mount: Deactivated successfully. Nov 1 00:23:06.236733 kubelet[2745]: I1101 00:23:06.236691 2745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:06.242030 kubelet[2745]: I1101 00:23:06.241108 2745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:06.250443 containerd[1624]: time="2025-11-01T00:23:06.250171118Z" level=info msg="StopPodSandbox for \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\"" Nov 1 00:23:06.253025 containerd[1624]: time="2025-11-01T00:23:06.251727290Z" level=info msg="StopPodSandbox for \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\"" Nov 1 00:23:06.258345 containerd[1624]: time="2025-11-01T00:23:06.258318517Z" level=info msg="Ensure that sandbox a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7 in task-service has been cleanup successfully" Nov 1 00:23:06.258726 kubelet[2745]: I1101 00:23:06.258684 2745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:06.260983 containerd[1624]: time="2025-11-01T00:23:06.259586717Z" level=info msg="StopPodSandbox for \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\"" Nov 1 00:23:06.260983 containerd[1624]: 
time="2025-11-01T00:23:06.259718871Z" level=info msg="Ensure that sandbox 61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030 in task-service has been cleanup successfully" Nov 1 00:23:06.267161 containerd[1624]: time="2025-11-01T00:23:06.258373813Z" level=info msg="Ensure that sandbox 6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e in task-service has been cleanup successfully" Nov 1 00:23:06.288368 kubelet[2745]: I1101 00:23:06.288324 2745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:06.290716 containerd[1624]: time="2025-11-01T00:23:06.290509280Z" level=info msg="StopPodSandbox for \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\"" Nov 1 00:23:06.293308 containerd[1624]: time="2025-11-01T00:23:06.293017927Z" level=info msg="Ensure that sandbox 60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb in task-service has been cleanup successfully" Nov 1 00:23:06.306041 kubelet[2745]: I1101 00:23:06.305957 2745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:06.309764 containerd[1624]: time="2025-11-01T00:23:06.309717409Z" level=info msg="StopPodSandbox for \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\"" Nov 1 00:23:06.311998 containerd[1624]: time="2025-11-01T00:23:06.311970276Z" level=info msg="Ensure that sandbox 69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24 in task-service has been cleanup successfully" Nov 1 00:23:06.317263 kubelet[2745]: I1101 00:23:06.317244 2745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:06.318742 containerd[1624]: time="2025-11-01T00:23:06.318696873Z" level=info msg="StopPodSandbox for 
\"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\"" Nov 1 00:23:06.318968 containerd[1624]: time="2025-11-01T00:23:06.318939208Z" level=info msg="Ensure that sandbox e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878 in task-service has been cleanup successfully" Nov 1 00:23:06.326162 kubelet[2745]: I1101 00:23:06.326139 2745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:06.327860 containerd[1624]: time="2025-11-01T00:23:06.327216005Z" level=info msg="StopPodSandbox for \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\"" Nov 1 00:23:06.327860 containerd[1624]: time="2025-11-01T00:23:06.327381722Z" level=info msg="Ensure that sandbox 8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497 in task-service has been cleanup successfully" Nov 1 00:23:06.331187 kubelet[2745]: I1101 00:23:06.331142 2745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:06.334323 containerd[1624]: time="2025-11-01T00:23:06.334276140Z" level=info msg="StopPodSandbox for \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\"" Nov 1 00:23:06.334899 containerd[1624]: time="2025-11-01T00:23:06.334873616Z" level=info msg="Ensure that sandbox 7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3 in task-service has been cleanup successfully" Nov 1 00:23:06.408645 containerd[1624]: time="2025-11-01T00:23:06.408391604Z" level=error msg="StopPodSandbox for \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\" failed" error="failed to destroy network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Nov 1 00:23:06.408857 kubelet[2745]: E1101 00:23:06.408755 2745 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:06.408938 kubelet[2745]: E1101 00:23:06.408854 2745 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7"} Nov 1 00:23:06.408968 kubelet[2745]: E1101 00:23:06.408936 2745 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86457ed6-a969-4f17-a69a-681dcab352cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:06.409047 kubelet[2745]: E1101 00:23:06.409005 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86457ed6-a969-4f17-a69a-681dcab352cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" 
Nov 1 00:23:06.412293 containerd[1624]: time="2025-11-01T00:23:06.412250888Z" level=error msg="StopPodSandbox for \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\" failed" error="failed to destroy network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:06.413207 kubelet[2745]: E1101 00:23:06.413021 2745 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:06.413207 kubelet[2745]: E1101 00:23:06.413084 2745 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e"} Nov 1 00:23:06.413207 kubelet[2745]: E1101 00:23:06.413146 2745 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a5e4241-2b02-4d05-aee8-621954146083\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:06.413207 kubelet[2745]: E1101 00:23:06.413171 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a5e4241-2b02-4d05-aee8-621954146083\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:23:06.413363 containerd[1624]: time="2025-11-01T00:23:06.413111828Z" level=error msg="StopPodSandbox for \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\" failed" error="failed to destroy network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:06.413595 kubelet[2745]: E1101 00:23:06.413396 2745 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:06.413595 kubelet[2745]: E1101 00:23:06.413488 2745 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb"} Nov 1 00:23:06.413595 kubelet[2745]: E1101 00:23:06.413526 2745 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5aad50b7-9c5b-4c75-b82d-9cd68d392290\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:06.413595 kubelet[2745]: E1101 00:23:06.413557 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5aad50b7-9c5b-4c75-b82d-9cd68d392290\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:23:06.419553 containerd[1624]: time="2025-11-01T00:23:06.419114799Z" level=error msg="StopPodSandbox for \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\" failed" error="failed to destroy network for sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:06.419717 kubelet[2745]: E1101 00:23:06.419654 2745 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:06.419763 kubelet[2745]: E1101 00:23:06.419723 2745 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878"} Nov 1 00:23:06.419811 kubelet[2745]: E1101 00:23:06.419772 2745 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16c60fed-179e-4b9b-b5f3-3af5fa94c7e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:06.419964 kubelet[2745]: E1101 00:23:06.419803 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16c60fed-179e-4b9b-b5f3-3af5fa94c7e7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hbtgd" podUID="16c60fed-179e-4b9b-b5f3-3af5fa94c7e7" Nov 1 00:23:06.424897 containerd[1624]: time="2025-11-01T00:23:06.424833094Z" level=error msg="StopPodSandbox for \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\" failed" error="failed to destroy network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:06.425418 kubelet[2745]: E1101 00:23:06.425241 2745 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:06.425474 kubelet[2745]: E1101 00:23:06.425433 2745 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24"} Nov 1 00:23:06.425511 kubelet[2745]: E1101 00:23:06.425474 2745 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:06.425560 kubelet[2745]: E1101 00:23:06.425525 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dd6f966-7pmfv" podUID="b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6" Nov 1 00:23:06.432797 containerd[1624]: time="2025-11-01T00:23:06.432764680Z" level=error msg="StopPodSandbox for \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\" 
failed" error="failed to destroy network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:06.433284 kubelet[2745]: E1101 00:23:06.433231 2745 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:06.433337 kubelet[2745]: E1101 00:23:06.433308 2745 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030"} Nov 1 00:23:06.433367 kubelet[2745]: E1101 00:23:06.433346 2745 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3349d8c7-91f7-48f7-a15a-d52d578f2952\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:06.433434 kubelet[2745]: E1101 00:23:06.433370 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3349d8c7-91f7-48f7-a15a-d52d578f2952\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n2bnk" podUID="3349d8c7-91f7-48f7-a15a-d52d578f2952" Nov 1 00:23:06.442159 containerd[1624]: time="2025-11-01T00:23:06.441843102Z" level=error msg="StopPodSandbox for \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\" failed" error="failed to destroy network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:06.443215 kubelet[2745]: E1101 00:23:06.442143 2745 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:06.443215 kubelet[2745]: E1101 00:23:06.442185 2745 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3"} Nov 1 00:23:06.443215 kubelet[2745]: E1101 00:23:06.442213 2745 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e33febfb-cf29-450e-a371-4a2c6d265345\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:06.443215 kubelet[2745]: E1101 00:23:06.442239 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e33febfb-cf29-450e-a371-4a2c6d265345\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:23:06.445200 containerd[1624]: time="2025-11-01T00:23:06.445159156Z" level=error msg="StopPodSandbox for \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\" failed" error="failed to destroy network for sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:06.445463 kubelet[2745]: E1101 00:23:06.445427 2745 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:06.445510 kubelet[2745]: E1101 00:23:06.445467 2745 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497"} Nov 1 00:23:06.445510 
kubelet[2745]: E1101 00:23:06.445492 2745 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2fb3e683-810b-4091-a4c8-6fa869de6607\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:06.445573 kubelet[2745]: E1101 00:23:06.445515 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2fb3e683-810b-4091-a4c8-6fa869de6607\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:23:09.361354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723478759.mount: Deactivated successfully. 
Nov 1 00:23:09.447931 containerd[1624]: time="2025-11-01T00:23:09.446695598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:23:09.464202 containerd[1624]: time="2025-11-01T00:23:09.464151859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:09.487202 containerd[1624]: time="2025-11-01T00:23:09.487104426Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:09.488563 containerd[1624]: time="2025-11-01T00:23:09.488519002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:09.493340 containerd[1624]: time="2025-11-01T00:23:09.493271697Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.247109692s" Nov 1 00:23:09.493340 containerd[1624]: time="2025-11-01T00:23:09.493333875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:23:09.570760 containerd[1624]: time="2025-11-01T00:23:09.570588523Z" level=info msg="CreateContainer within sandbox \"045773a9c9dc0f26eccef8d8c11019a10c26a138124fccb56fe2d78a4788d70f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:23:09.621837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863717921.mount: Deactivated 
successfully. Nov 1 00:23:09.637513 containerd[1624]: time="2025-11-01T00:23:09.637436129Z" level=info msg="CreateContainer within sandbox \"045773a9c9dc0f26eccef8d8c11019a10c26a138124fccb56fe2d78a4788d70f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6e0cf0686fb2f6cb2b56a47e33262020e059e8f679a12683c61605d9c7c4400c\"" Nov 1 00:23:09.640340 containerd[1624]: time="2025-11-01T00:23:09.640017257Z" level=info msg="StartContainer for \"6e0cf0686fb2f6cb2b56a47e33262020e059e8f679a12683c61605d9c7c4400c\"" Nov 1 00:23:09.903002 containerd[1624]: time="2025-11-01T00:23:09.902843655Z" level=info msg="StartContainer for \"6e0cf0686fb2f6cb2b56a47e33262020e059e8f679a12683c61605d9c7c4400c\" returns successfully" Nov 1 00:23:09.998530 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:23:10.002456 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:23:10.272938 containerd[1624]: time="2025-11-01T00:23:10.272858540Z" level=info msg="StopPodSandbox for \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\"" Nov 1 00:23:10.619056 systemd[1]: run-containerd-runc-k8s.io-6e0cf0686fb2f6cb2b56a47e33262020e059e8f679a12683c61605d9c7c4400c-runc.oGOdab.mount: Deactivated successfully. Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.460 [INFO][3960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.469 [INFO][3960] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" iface="eth0" netns="/var/run/netns/cni-422ff3e3-4718-0f8b-1c04-4225e3c03513" Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.469 [INFO][3960] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" iface="eth0" netns="/var/run/netns/cni-422ff3e3-4718-0f8b-1c04-4225e3c03513" Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.472 [INFO][3960] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" iface="eth0" netns="/var/run/netns/cni-422ff3e3-4718-0f8b-1c04-4225e3c03513" Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.472 [INFO][3960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.472 [INFO][3960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.752 [INFO][3968] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" HandleID="k8s-pod-network.69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.756 [INFO][3968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.757 [INFO][3968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.767 [WARNING][3968] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" HandleID="k8s-pod-network.69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.767 [INFO][3968] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" HandleID="k8s-pod-network.69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.769 [INFO][3968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:10.773012 containerd[1624]: 2025-11-01 00:23:10.771 [INFO][3960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:10.774557 containerd[1624]: time="2025-11-01T00:23:10.773117025Z" level=info msg="TearDown network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\" successfully" Nov 1 00:23:10.774557 containerd[1624]: time="2025-11-01T00:23:10.773164426Z" level=info msg="StopPodSandbox for \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\" returns successfully" Nov 1 00:23:10.778667 systemd[1]: run-netns-cni\x2d422ff3e3\x2d4718\x2d0f8b\x2d1c04\x2d4225e3c03513.mount: Deactivated successfully. 
Nov 1 00:23:10.910240 kubelet[2745]: I1101 00:23:10.910070 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-whisker-backend-key-pair\") pod \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\" (UID: \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\") " Nov 1 00:23:10.910240 kubelet[2745]: I1101 00:23:10.910191 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-whisker-ca-bundle\") pod \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\" (UID: \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\") " Nov 1 00:23:10.911372 kubelet[2745]: I1101 00:23:10.910257 2745 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62b7c\" (UniqueName: \"kubernetes.io/projected/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-kube-api-access-62b7c\") pod \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\" (UID: \"b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6\") " Nov 1 00:23:10.926542 kubelet[2745]: I1101 00:23:10.920626 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6" (UID: "b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:23:10.926542 kubelet[2745]: I1101 00:23:10.926216 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-kube-api-access-62b7c" (OuterVolumeSpecName: "kube-api-access-62b7c") pod "b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6" (UID: "b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6"). InnerVolumeSpecName "kube-api-access-62b7c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:23:10.926542 kubelet[2745]: I1101 00:23:10.920616 2745 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6" (UID: "b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:23:10.926672 systemd[1]: var-lib-kubelet-pods-b3c584e8\x2de98e\x2d4a3a\x2daeff\x2d33e52fe7b2a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d62b7c.mount: Deactivated successfully. Nov 1 00:23:10.926818 systemd[1]: var-lib-kubelet-pods-b3c584e8\x2de98e\x2d4a3a\x2daeff\x2d33e52fe7b2a6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:23:11.010698 kubelet[2745]: I1101 00:23:11.010636 2745 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-b21903d23a\" DevicePath \"\"" Nov 1 00:23:11.010698 kubelet[2745]: I1101 00:23:11.010697 2745 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-whisker-ca-bundle\") on node \"ci-4081-3-6-n-b21903d23a\" DevicePath \"\"" Nov 1 00:23:11.010897 kubelet[2745]: I1101 00:23:11.010724 2745 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-62b7c\" (UniqueName: \"kubernetes.io/projected/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6-kube-api-access-62b7c\") on node \"ci-4081-3-6-n-b21903d23a\" DevicePath \"\"" Nov 1 00:23:11.409463 systemd-journald[1174]: Under memory pressure, flushing caches. Nov 1 00:23:11.408164 systemd-resolved[1511]: Under memory pressure, flushing caches. 
Nov 1 00:23:11.408236 systemd-resolved[1511]: Flushed all caches. Nov 1 00:23:11.422208 kubelet[2745]: I1101 00:23:11.418064 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m2jz8" podStartSLOduration=2.897777444 podStartE2EDuration="15.405782387s" podCreationTimestamp="2025-11-01 00:22:56 +0000 UTC" firstStartedPulling="2025-11-01 00:22:56.98620618 +0000 UTC m=+22.124898665" lastFinishedPulling="2025-11-01 00:23:09.494211124 +0000 UTC m=+34.632903608" observedRunningTime="2025-11-01 00:23:10.472595038 +0000 UTC m=+35.611287554" watchObservedRunningTime="2025-11-01 00:23:11.405782387 +0000 UTC m=+36.544474872" Nov 1 00:23:11.615564 kubelet[2745]: I1101 00:23:11.615474 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ceed905b-f8f5-47a1-9eef-2e450e657cf3-whisker-backend-key-pair\") pod \"whisker-5bd87784b4-tjjnp\" (UID: \"ceed905b-f8f5-47a1-9eef-2e450e657cf3\") " pod="calico-system/whisker-5bd87784b4-tjjnp" Nov 1 00:23:11.615564 kubelet[2745]: I1101 00:23:11.615553 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg6sd\" (UniqueName: \"kubernetes.io/projected/ceed905b-f8f5-47a1-9eef-2e450e657cf3-kube-api-access-kg6sd\") pod \"whisker-5bd87784b4-tjjnp\" (UID: \"ceed905b-f8f5-47a1-9eef-2e450e657cf3\") " pod="calico-system/whisker-5bd87784b4-tjjnp" Nov 1 00:23:11.615789 kubelet[2745]: I1101 00:23:11.615597 2745 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ceed905b-f8f5-47a1-9eef-2e450e657cf3-whisker-ca-bundle\") pod \"whisker-5bd87784b4-tjjnp\" (UID: \"ceed905b-f8f5-47a1-9eef-2e450e657cf3\") " pod="calico-system/whisker-5bd87784b4-tjjnp" Nov 1 00:23:11.794763 containerd[1624]: time="2025-11-01T00:23:11.794726023Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd87784b4-tjjnp,Uid:ceed905b-f8f5-47a1-9eef-2e450e657cf3,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:12.071806 systemd-networkd[1251]: calibf856b9e060: Link UP Nov 1 00:23:12.072565 systemd-networkd[1251]: calibf856b9e060: Gained carrier Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:11.893 [INFO][4118] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:11.928 [INFO][4118] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0 whisker-5bd87784b4- calico-system ceed905b-f8f5-47a1-9eef-2e450e657cf3 866 0 2025-11-01 00:23:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5bd87784b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-b21903d23a whisker-5bd87784b4-tjjnp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibf856b9e060 [] [] }} ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Namespace="calico-system" Pod="whisker-5bd87784b4-tjjnp" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:11.929 [INFO][4118] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Namespace="calico-system" Pod="whisker-5bd87784b4-tjjnp" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:11.961 [INFO][4130] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" 
HandleID="k8s-pod-network.bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:11.962 [INFO][4130] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" HandleID="k8s-pod-network.bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5540), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-b21903d23a", "pod":"whisker-5bd87784b4-tjjnp", "timestamp":"2025-11-01 00:23:11.961317853 +0000 UTC"}, Hostname:"ci-4081-3-6-n-b21903d23a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:11.962 [INFO][4130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:11.962 [INFO][4130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:11.962 [INFO][4130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-b21903d23a' Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:11.985 [INFO][4130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.000 [INFO][4130] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.010 [INFO][4130] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.014 [INFO][4130] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.020 [INFO][4130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.023 [INFO][4130] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.027 [INFO][4130] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.033 [INFO][4130] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.046 [INFO][4130] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.193/26] block=192.168.113.192/26 handle="k8s-pod-network.bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.046 [INFO][4130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.193/26] handle="k8s-pod-network.bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.046 [INFO][4130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:12.085304 containerd[1624]: 2025-11-01 00:23:12.046 [INFO][4130] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.193/26] IPv6=[] ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" HandleID="k8s-pod-network.bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" Nov 1 00:23:12.086364 containerd[1624]: 2025-11-01 00:23:12.051 [INFO][4118] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Namespace="calico-system" Pod="whisker-5bd87784b4-tjjnp" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0", GenerateName:"whisker-5bd87784b4-", Namespace:"calico-system", SelfLink:"", UID:"ceed905b-f8f5-47a1-9eef-2e450e657cf3", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bd87784b4", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"", Pod:"whisker-5bd87784b4-tjjnp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf856b9e060", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.086364 containerd[1624]: 2025-11-01 00:23:12.051 [INFO][4118] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.193/32] ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Namespace="calico-system" Pod="whisker-5bd87784b4-tjjnp" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" Nov 1 00:23:12.086364 containerd[1624]: 2025-11-01 00:23:12.051 [INFO][4118] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf856b9e060 ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Namespace="calico-system" Pod="whisker-5bd87784b4-tjjnp" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" Nov 1 00:23:12.086364 containerd[1624]: 2025-11-01 00:23:12.060 [INFO][4118] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Namespace="calico-system" Pod="whisker-5bd87784b4-tjjnp" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" Nov 1 00:23:12.086364 containerd[1624]: 2025-11-01 00:23:12.060 [INFO][4118] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Namespace="calico-system" Pod="whisker-5bd87784b4-tjjnp" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0", GenerateName:"whisker-5bd87784b4-", Namespace:"calico-system", SelfLink:"", UID:"ceed905b-f8f5-47a1-9eef-2e450e657cf3", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bd87784b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a", Pod:"whisker-5bd87784b4-tjjnp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibf856b9e060", MAC:"ca:51:d4:e4:23:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:12.086364 containerd[1624]: 2025-11-01 00:23:12.081 [INFO][4118] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a" Namespace="calico-system" Pod="whisker-5bd87784b4-tjjnp" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--5bd87784b4--tjjnp-eth0" Nov 1 00:23:12.145331 containerd[1624]: time="2025-11-01T00:23:12.144875161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:12.145331 containerd[1624]: time="2025-11-01T00:23:12.144936569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:12.145331 containerd[1624]: time="2025-11-01T00:23:12.144958621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:12.145331 containerd[1624]: time="2025-11-01T00:23:12.145040187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:12.223420 containerd[1624]: time="2025-11-01T00:23:12.222992671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bd87784b4-tjjnp,Uid:ceed905b-f8f5-47a1-9eef-2e450e657cf3,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf730ed4e5c72004a7938c7a3ce5a61432106639b46d4efbb8a39496ce98789a\"" Nov 1 00:23:12.226167 containerd[1624]: time="2025-11-01T00:23:12.225387849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:12.653448 containerd[1624]: time="2025-11-01T00:23:12.653355538Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:12.661669 containerd[1624]: time="2025-11-01T00:23:12.654836748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:12.661792 containerd[1624]: time="2025-11-01T00:23:12.654967157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:12.662020 kubelet[2745]: E1101 00:23:12.661976 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:12.662762 kubelet[2745]: E1101 00:23:12.662731 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:12.673826 kubelet[2745]: E1101 00:23:12.673750 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b9a93a7233c9461ab5447c8e9d685214,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg6sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5bd87784b4-tjjnp_calico-system(ceed905b-f8f5-47a1-9eef-2e450e657cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:12.676374 containerd[1624]: time="2025-11-01T00:23:12.676334328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
00:23:12.998776 kubelet[2745]: I1101 00:23:12.998278 2745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6" path="/var/lib/kubelet/pods/b3c584e8-e98e-4a3a-aeff-33e52fe7b2a6/volumes" Nov 1 00:23:13.118875 containerd[1624]: time="2025-11-01T00:23:13.118704724Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:13.120161 containerd[1624]: time="2025-11-01T00:23:13.120100941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:13.120656 containerd[1624]: time="2025-11-01T00:23:13.120219828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:13.120730 kubelet[2745]: E1101 00:23:13.120353 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:13.120730 kubelet[2745]: E1101 00:23:13.120404 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:13.120807 kubelet[2745]: E1101 00:23:13.120512 2745 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg6sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5bd87784b4-tjjnp_calico-system(ceed905b-f8f5-47a1-9eef-2e450e657cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:13.122081 kubelet[2745]: E1101 00:23:13.122031 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:23:13.393753 kubelet[2745]: E1101 00:23:13.393550 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:23:13.454501 systemd-resolved[1511]: Under memory pressure, flushing caches. Nov 1 00:23:13.454524 systemd-resolved[1511]: Flushed all caches. Nov 1 00:23:13.456150 systemd-journald[1174]: Under memory pressure, flushing caches. Nov 1 00:23:13.838321 systemd-networkd[1251]: calibf856b9e060: Gained IPv6LL Nov 1 00:23:17.992827 containerd[1624]: time="2025-11-01T00:23:17.992671748Z" level=info msg="StopPodSandbox for \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\"" Nov 1 00:23:17.994146 containerd[1624]: time="2025-11-01T00:23:17.993607644Z" level=info msg="StopPodSandbox for \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\"" Nov 1 00:23:17.995624 containerd[1624]: time="2025-11-01T00:23:17.995579007Z" level=info msg="StopPodSandbox for \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\"" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.068 [INFO][4353] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.069 [INFO][4353] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" iface="eth0" netns="/var/run/netns/cni-277d028f-578c-1469-1f07-5e06240f302a" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.069 [INFO][4353] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" iface="eth0" netns="/var/run/netns/cni-277d028f-578c-1469-1f07-5e06240f302a" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.070 [INFO][4353] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" iface="eth0" netns="/var/run/netns/cni-277d028f-578c-1469-1f07-5e06240f302a" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.070 [INFO][4353] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.070 [INFO][4353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.125 [INFO][4371] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" HandleID="k8s-pod-network.6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.127 [INFO][4371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.127 [INFO][4371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.136 [WARNING][4371] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" HandleID="k8s-pod-network.6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.136 [INFO][4371] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" HandleID="k8s-pod-network.6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.140 [INFO][4371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:18.154330 containerd[1624]: 2025-11-01 00:23:18.143 [INFO][4353] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:18.158736 containerd[1624]: time="2025-11-01T00:23:18.154584891Z" level=info msg="TearDown network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\" successfully" Nov 1 00:23:18.158736 containerd[1624]: time="2025-11-01T00:23:18.154626330Z" level=info msg="StopPodSandbox for \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\" returns successfully" Nov 1 00:23:18.158736 containerd[1624]: time="2025-11-01T00:23:18.158402816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd88c66c7-t86s4,Uid:7a5e4241-2b02-4d05-aee8-621954146083,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:18.163481 systemd[1]: run-netns-cni\x2d277d028f\x2d578c\x2d1469\x2d1f07\x2d5e06240f302a.mount: Deactivated successfully. 
Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.091 [INFO][4355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.095 [INFO][4355] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" iface="eth0" netns="/var/run/netns/cni-7ccae77b-3293-5923-7228-68b2c826b3ff" Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.097 [INFO][4355] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" iface="eth0" netns="/var/run/netns/cni-7ccae77b-3293-5923-7228-68b2c826b3ff" Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.098 [INFO][4355] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" iface="eth0" netns="/var/run/netns/cni-7ccae77b-3293-5923-7228-68b2c826b3ff" Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.098 [INFO][4355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.098 [INFO][4355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.149 [INFO][4377] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" HandleID="k8s-pod-network.e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.149 [INFO][4377] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.149 [INFO][4377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.159 [WARNING][4377] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" HandleID="k8s-pod-network.e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.162 [INFO][4377] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" HandleID="k8s-pod-network.e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.169 [INFO][4377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:18.180883 containerd[1624]: 2025-11-01 00:23:18.171 [INFO][4355] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:18.186014 containerd[1624]: time="2025-11-01T00:23:18.181067137Z" level=info msg="TearDown network for sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\" successfully" Nov 1 00:23:18.186014 containerd[1624]: time="2025-11-01T00:23:18.181098497Z" level=info msg="StopPodSandbox for \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\" returns successfully" Nov 1 00:23:18.186014 containerd[1624]: time="2025-11-01T00:23:18.182046055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbtgd,Uid:16c60fed-179e-4b9b-b5f3-3af5fa94c7e7,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:18.185614 systemd[1]: run-netns-cni\x2d7ccae77b\x2d3293\x2d5923\x2d7228\x2d68b2c826b3ff.mount: Deactivated successfully. Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.102 [INFO][4346] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.103 [INFO][4346] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" iface="eth0" netns="/var/run/netns/cni-988e01d8-4044-6ffa-aeff-ed2e9b7309bd" Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.104 [INFO][4346] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" iface="eth0" netns="/var/run/netns/cni-988e01d8-4044-6ffa-aeff-ed2e9b7309bd" Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.105 [INFO][4346] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" iface="eth0" netns="/var/run/netns/cni-988e01d8-4044-6ffa-aeff-ed2e9b7309bd" Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.105 [INFO][4346] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.105 [INFO][4346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.170 [INFO][4380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" HandleID="k8s-pod-network.8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.170 [INFO][4380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.171 [INFO][4380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.180 [WARNING][4380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" HandleID="k8s-pod-network.8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.180 [INFO][4380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" HandleID="k8s-pod-network.8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.186 [INFO][4380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:18.192246 containerd[1624]: 2025-11-01 00:23:18.189 [INFO][4346] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:18.196115 containerd[1624]: time="2025-11-01T00:23:18.192392640Z" level=info msg="TearDown network for sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\" successfully" Nov 1 00:23:18.196115 containerd[1624]: time="2025-11-01T00:23:18.192427917Z" level=info msg="StopPodSandbox for \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\" returns successfully" Nov 1 00:23:18.196115 containerd[1624]: time="2025-11-01T00:23:18.194390762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnx62,Uid:2fb3e683-810b-4091-a4c8-6fa869de6607,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:18.197416 systemd[1]: run-netns-cni\x2d988e01d8\x2d4044\x2d6ffa\x2daeff\x2ded2e9b7309bd.mount: Deactivated successfully. 
Nov 1 00:23:18.436473 systemd-networkd[1251]: cali8d9beeed6b4: Link UP Nov 1 00:23:18.437508 systemd-networkd[1251]: cali8d9beeed6b4: Gained carrier Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.257 [INFO][4397] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.272 [INFO][4397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0 coredns-668d6bf9bc- kube-system 16c60fed-179e-4b9b-b5f3-3af5fa94c7e7 905 0 2025-11-01 00:22:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-b21903d23a coredns-668d6bf9bc-hbtgd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8d9beeed6b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbtgd" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.272 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbtgd" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.343 [INFO][4424] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" HandleID="k8s-pod-network.47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 
00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.345 [INFO][4424] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" HandleID="k8s-pod-network.47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-b21903d23a", "pod":"coredns-668d6bf9bc-hbtgd", "timestamp":"2025-11-01 00:23:18.34374757 +0000 UTC"}, Hostname:"ci-4081-3-6-n-b21903d23a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.346 [INFO][4424] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.346 [INFO][4424] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.346 [INFO][4424] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-b21903d23a' Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.362 [INFO][4424] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.380 [INFO][4424] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.394 [INFO][4424] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.397 [INFO][4424] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.400 [INFO][4424] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.400 [INFO][4424] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.404 [INFO][4424] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0 Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.411 [INFO][4424] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.421 [INFO][4424] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.194/26] block=192.168.113.192/26 handle="k8s-pod-network.47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.421 [INFO][4424] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.194/26] handle="k8s-pod-network.47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.421 [INFO][4424] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:18.472331 containerd[1624]: 2025-11-01 00:23:18.421 [INFO][4424] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.194/26] IPv6=[] ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" HandleID="k8s-pod-network.47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:18.476891 containerd[1624]: 2025-11-01 00:23:18.424 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbtgd" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"16c60fed-179e-4b9b-b5f3-3af5fa94c7e7", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"", Pod:"coredns-668d6bf9bc-hbtgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d9beeed6b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.476891 containerd[1624]: 2025-11-01 00:23:18.424 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.194/32] ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbtgd" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:18.476891 containerd[1624]: 2025-11-01 00:23:18.424 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d9beeed6b4 ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbtgd" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:18.476891 containerd[1624]: 2025-11-01 00:23:18.440 [INFO][4397] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbtgd" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:18.476891 containerd[1624]: 2025-11-01 00:23:18.444 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbtgd" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"16c60fed-179e-4b9b-b5f3-3af5fa94c7e7", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0", Pod:"coredns-668d6bf9bc-hbtgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d9beeed6b4", 
MAC:"16:29:4a:56:32:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.476891 containerd[1624]: 2025-11-01 00:23:18.460 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hbtgd" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:18.538045 containerd[1624]: time="2025-11-01T00:23:18.533024116Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:18.538045 containerd[1624]: time="2025-11-01T00:23:18.537750786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:18.538045 containerd[1624]: time="2025-11-01T00:23:18.537771305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:18.538045 containerd[1624]: time="2025-11-01T00:23:18.537898929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:18.555192 systemd-networkd[1251]: calicddb88a7c00: Link UP Nov 1 00:23:18.556199 systemd-networkd[1251]: calicddb88a7c00: Gained carrier Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.299 [INFO][4393] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.322 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0 calico-apiserver-5cd88c66c7- calico-apiserver 7a5e4241-2b02-4d05-aee8-621954146083 904 0 2025-11-01 00:22:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cd88c66c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-b21903d23a calico-apiserver-5cd88c66c7-t86s4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicddb88a7c00 [] [] }} ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-t86s4" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.322 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-t86s4" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.429 [INFO][4433] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" HandleID="k8s-pod-network.297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.429 [INFO][4433] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" HandleID="k8s-pod-network.297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-b21903d23a", "pod":"calico-apiserver-5cd88c66c7-t86s4", "timestamp":"2025-11-01 00:23:18.429652452 +0000 UTC"}, Hostname:"ci-4081-3-6-n-b21903d23a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.430 [INFO][4433] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.430 [INFO][4433] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.430 [INFO][4433] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-b21903d23a' Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.464 [INFO][4433] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.475 [INFO][4433] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.499 [INFO][4433] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.503 [INFO][4433] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.508 [INFO][4433] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.508 [INFO][4433] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.511 [INFO][4433] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946 Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.519 [INFO][4433] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.528 [INFO][4433] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.195/26] block=192.168.113.192/26 handle="k8s-pod-network.297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.529 [INFO][4433] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.195/26] handle="k8s-pod-network.297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.530 [INFO][4433] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:18.594605 containerd[1624]: 2025-11-01 00:23:18.531 [INFO][4433] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.195/26] IPv6=[] ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" HandleID="k8s-pod-network.297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.598394 containerd[1624]: 2025-11-01 00:23:18.551 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-t86s4" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0", GenerateName:"calico-apiserver-5cd88c66c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a5e4241-2b02-4d05-aee8-621954146083", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd88c66c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"", Pod:"calico-apiserver-5cd88c66c7-t86s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicddb88a7c00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.598394 containerd[1624]: 2025-11-01 00:23:18.551 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.195/32] ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-t86s4" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.598394 containerd[1624]: 2025-11-01 00:23:18.551 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicddb88a7c00 ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-t86s4" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.598394 containerd[1624]: 2025-11-01 00:23:18.558 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Namespace="calico-apiserver" 
Pod="calico-apiserver-5cd88c66c7-t86s4" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.598394 containerd[1624]: 2025-11-01 00:23:18.559 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-t86s4" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0", GenerateName:"calico-apiserver-5cd88c66c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a5e4241-2b02-4d05-aee8-621954146083", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd88c66c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946", Pod:"calico-apiserver-5cd88c66c7-t86s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calicddb88a7c00", MAC:"e2:cc:86:3b:35:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.598394 containerd[1624]: 2025-11-01 00:23:18.580 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-t86s4" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:18.660418 systemd-networkd[1251]: cali3cfa6e116e6: Link UP Nov 1 00:23:18.661273 systemd-networkd[1251]: cali3cfa6e116e6: Gained carrier Nov 1 00:23:18.678844 containerd[1624]: time="2025-11-01T00:23:18.675383399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:18.678844 containerd[1624]: time="2025-11-01T00:23:18.678561204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:18.678844 containerd[1624]: time="2025-11-01T00:23:18.678575632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:18.680673 containerd[1624]: time="2025-11-01T00:23:18.680590917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.314 [INFO][4412] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.331 [INFO][4412] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0 csi-node-driver- calico-system 2fb3e683-810b-4091-a4c8-6fa869de6607 906 0 2025-11-01 00:22:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-b21903d23a csi-node-driver-jnx62 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3cfa6e116e6 [] [] }} ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Namespace="calico-system" Pod="csi-node-driver-jnx62" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.331 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Namespace="calico-system" Pod="csi-node-driver-jnx62" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.462 [INFO][4437] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" HandleID="k8s-pod-network.0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.691176 containerd[1624]: 
2025-11-01 00:23:18.466 [INFO][4437] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" HandleID="k8s-pod-network.0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e2c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-b21903d23a", "pod":"csi-node-driver-jnx62", "timestamp":"2025-11-01 00:23:18.462939132 +0000 UTC"}, Hostname:"ci-4081-3-6-n-b21903d23a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.467 [INFO][4437] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.531 [INFO][4437] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.532 [INFO][4437] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-b21903d23a' Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.572 [INFO][4437] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.601 [INFO][4437] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.615 [INFO][4437] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.622 [INFO][4437] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.626 [INFO][4437] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.627 [INFO][4437] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.630 [INFO][4437] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.636 [INFO][4437] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.648 [INFO][4437] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.196/26] block=192.168.113.192/26 handle="k8s-pod-network.0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.648 [INFO][4437] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.196/26] handle="k8s-pod-network.0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.648 [INFO][4437] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:18.691176 containerd[1624]: 2025-11-01 00:23:18.648 [INFO][4437] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.196/26] IPv6=[] ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" HandleID="k8s-pod-network.0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.693664 containerd[1624]: 2025-11-01 00:23:18.654 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Namespace="calico-system" Pod="csi-node-driver-jnx62" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fb3e683-810b-4091-a4c8-6fa869de6607", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"", Pod:"csi-node-driver-jnx62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3cfa6e116e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.693664 containerd[1624]: 2025-11-01 00:23:18.654 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.196/32] ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Namespace="calico-system" Pod="csi-node-driver-jnx62" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.693664 containerd[1624]: 2025-11-01 00:23:18.654 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cfa6e116e6 ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Namespace="calico-system" Pod="csi-node-driver-jnx62" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.693664 containerd[1624]: 2025-11-01 00:23:18.658 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Namespace="calico-system" Pod="csi-node-driver-jnx62" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.693664 
containerd[1624]: 2025-11-01 00:23:18.658 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Namespace="calico-system" Pod="csi-node-driver-jnx62" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fb3e683-810b-4091-a4c8-6fa869de6607", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d", Pod:"csi-node-driver-jnx62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3cfa6e116e6", MAC:"06:15:67:e2:4b:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:18.693664 containerd[1624]: 
2025-11-01 00:23:18.677 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d" Namespace="calico-system" Pod="csi-node-driver-jnx62" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:18.721776 containerd[1624]: time="2025-11-01T00:23:18.721696439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hbtgd,Uid:16c60fed-179e-4b9b-b5f3-3af5fa94c7e7,Namespace:kube-system,Attempt:1,} returns sandbox id \"47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0\"" Nov 1 00:23:18.727938 containerd[1624]: time="2025-11-01T00:23:18.727865230Z" level=info msg="CreateContainer within sandbox \"47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:18.751883 containerd[1624]: time="2025-11-01T00:23:18.751581168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:18.751883 containerd[1624]: time="2025-11-01T00:23:18.751637976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:18.751883 containerd[1624]: time="2025-11-01T00:23:18.751650921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:18.751883 containerd[1624]: time="2025-11-01T00:23:18.751719131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:18.757143 containerd[1624]: time="2025-11-01T00:23:18.756578735Z" level=info msg="CreateContainer within sandbox \"47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"263197027bf0d32ec50459184eb1a56f3770e89267a46f739b4e88d9843ce105\"" Nov 1 00:23:18.759228 containerd[1624]: time="2025-11-01T00:23:18.759209335Z" level=info msg="StartContainer for \"263197027bf0d32ec50459184eb1a56f3770e89267a46f739b4e88d9843ce105\"" Nov 1 00:23:18.778577 containerd[1624]: time="2025-11-01T00:23:18.778534293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd88c66c7-t86s4,Uid:7a5e4241-2b02-4d05-aee8-621954146083,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946\"" Nov 1 00:23:18.781268 containerd[1624]: time="2025-11-01T00:23:18.780662654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:18.817832 containerd[1624]: time="2025-11-01T00:23:18.817713118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jnx62,Uid:2fb3e683-810b-4091-a4c8-6fa869de6607,Namespace:calico-system,Attempt:1,} returns sandbox id \"0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d\"" Nov 1 00:23:18.828376 containerd[1624]: time="2025-11-01T00:23:18.828246889Z" level=info msg="StartContainer for \"263197027bf0d32ec50459184eb1a56f3770e89267a46f739b4e88d9843ce105\" returns successfully" Nov 1 00:23:18.994262 containerd[1624]: time="2025-11-01T00:23:18.993788522Z" level=info msg="StopPodSandbox for \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\"" Nov 1 00:23:18.996060 containerd[1624]: time="2025-11-01T00:23:18.994273207Z" level=info msg="StopPodSandbox for \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\"" Nov 1 00:23:19.115250 containerd[1624]: 
2025-11-01 00:23:19.074 [INFO][4656] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.074 [INFO][4656] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" iface="eth0" netns="/var/run/netns/cni-480c4f6c-d961-c13b-20b9-c3f70d517ea8" Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.076 [INFO][4656] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" iface="eth0" netns="/var/run/netns/cni-480c4f6c-d961-c13b-20b9-c3f70d517ea8" Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.076 [INFO][4656] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" iface="eth0" netns="/var/run/netns/cni-480c4f6c-d961-c13b-20b9-c3f70d517ea8" Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.076 [INFO][4656] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.076 [INFO][4656] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.103 [INFO][4675] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" HandleID="k8s-pod-network.60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.103 [INFO][4675] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.103 [INFO][4675] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.109 [WARNING][4675] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" HandleID="k8s-pod-network.60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.109 [INFO][4675] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" HandleID="k8s-pod-network.60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.110 [INFO][4675] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:19.115250 containerd[1624]: 2025-11-01 00:23:19.112 [INFO][4656] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:19.115702 containerd[1624]: time="2025-11-01T00:23:19.115362496Z" level=info msg="TearDown network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\" successfully" Nov 1 00:23:19.115702 containerd[1624]: time="2025-11-01T00:23:19.115386582Z" level=info msg="StopPodSandbox for \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\" returns successfully" Nov 1 00:23:19.115998 containerd[1624]: time="2025-11-01T00:23:19.115973151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lrfg9,Uid:5aad50b7-9c5b-4c75-b82d-9cd68d392290,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.091 [INFO][4661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.091 [INFO][4661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" iface="eth0" netns="/var/run/netns/cni-df9b80e7-95fd-43ee-0568-11e7c7833ccd" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.092 [INFO][4661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" iface="eth0" netns="/var/run/netns/cni-df9b80e7-95fd-43ee-0568-11e7c7833ccd" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.092 [INFO][4661] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" iface="eth0" netns="/var/run/netns/cni-df9b80e7-95fd-43ee-0568-11e7c7833ccd" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.092 [INFO][4661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.092 [INFO][4661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.116 [INFO][4680] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" HandleID="k8s-pod-network.7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.117 [INFO][4680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.117 [INFO][4680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.122 [WARNING][4680] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" HandleID="k8s-pod-network.7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.122 [INFO][4680] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" HandleID="k8s-pod-network.7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.124 [INFO][4680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:19.131489 containerd[1624]: 2025-11-01 00:23:19.128 [INFO][4661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:19.133268 containerd[1624]: time="2025-11-01T00:23:19.132025887Z" level=info msg="TearDown network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\" successfully" Nov 1 00:23:19.133268 containerd[1624]: time="2025-11-01T00:23:19.132074180Z" level=info msg="StopPodSandbox for \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\" returns successfully" Nov 1 00:23:19.133988 containerd[1624]: time="2025-11-01T00:23:19.133576096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85dfcd4bbd-qbgm9,Uid:e33febfb-cf29-450e-a371-4a2c6d265345,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:19.187940 systemd[1]: run-netns-cni\x2d480c4f6c\x2dd961\x2dc13b\x2d20b9\x2dc3f70d517ea8.mount: Deactivated successfully. Nov 1 00:23:19.190341 systemd[1]: run-netns-cni\x2ddf9b80e7\x2d95fd\x2d43ee\x2d0568\x2d11e7c7833ccd.mount: Deactivated successfully. 
Nov 1 00:23:19.223994 containerd[1624]: time="2025-11-01T00:23:19.223645859Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:19.224758 containerd[1624]: time="2025-11-01T00:23:19.224614046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:19.224758 containerd[1624]: time="2025-11-01T00:23:19.224705762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:19.225029 kubelet[2745]: E1101 00:23:19.224908 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:19.225029 kubelet[2745]: E1101 00:23:19.224977 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:19.225386 kubelet[2745]: E1101 00:23:19.225269 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfdz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd88c66c7-t86s4_calico-apiserver(7a5e4241-2b02-4d05-aee8-621954146083): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:19.226927 kubelet[2745]: E1101 00:23:19.226634 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:23:19.227001 containerd[1624]: time="2025-11-01T00:23:19.226698513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:19.289512 systemd-networkd[1251]: cali91f9ac355c4: Link UP Nov 1 00:23:19.292616 systemd-networkd[1251]: cali91f9ac355c4: Gained carrier Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.171 [INFO][4698] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.189 [INFO][4698] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0 calico-kube-controllers-85dfcd4bbd- calico-system e33febfb-cf29-450e-a371-4a2c6d265345 925 0 2025-11-01 00:22:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85dfcd4bbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-b21903d23a calico-kube-controllers-85dfcd4bbd-qbgm9 eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali91f9ac355c4 [] [] }} ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Namespace="calico-system" Pod="calico-kube-controllers-85dfcd4bbd-qbgm9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.189 [INFO][4698] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Namespace="calico-system" Pod="calico-kube-controllers-85dfcd4bbd-qbgm9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.235 [INFO][4710] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" HandleID="k8s-pod-network.bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.236 [INFO][4710] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" HandleID="k8s-pod-network.bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f9a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-b21903d23a", "pod":"calico-kube-controllers-85dfcd4bbd-qbgm9", "timestamp":"2025-11-01 00:23:19.23595837 +0000 UTC"}, Hostname:"ci-4081-3-6-n-b21903d23a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.236 [INFO][4710] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.236 [INFO][4710] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.236 [INFO][4710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-b21903d23a' Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.243 [INFO][4710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.249 [INFO][4710] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.254 [INFO][4710] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.256 [INFO][4710] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.259 [INFO][4710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.259 [INFO][4710] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.262 [INFO][4710] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5 Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.270 [INFO][4710] ipam/ipam.go 
1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.277 [INFO][4710] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.197/26] block=192.168.113.192/26 handle="k8s-pod-network.bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.277 [INFO][4710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.197/26] handle="k8s-pod-network.bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.277 [INFO][4710] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:19.304489 containerd[1624]: 2025-11-01 00:23:19.277 [INFO][4710] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.197/26] IPv6=[] ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" HandleID="k8s-pod-network.bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.305557 containerd[1624]: 2025-11-01 00:23:19.280 [INFO][4698] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Namespace="calico-system" Pod="calico-kube-controllers-85dfcd4bbd-qbgm9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0", GenerateName:"calico-kube-controllers-85dfcd4bbd-", Namespace:"calico-system", 
SelfLink:"", UID:"e33febfb-cf29-450e-a371-4a2c6d265345", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85dfcd4bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"", Pod:"calico-kube-controllers-85dfcd4bbd-qbgm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91f9ac355c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:19.305557 containerd[1624]: 2025-11-01 00:23:19.281 [INFO][4698] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.197/32] ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Namespace="calico-system" Pod="calico-kube-controllers-85dfcd4bbd-qbgm9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.305557 containerd[1624]: 2025-11-01 00:23:19.281 [INFO][4698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91f9ac355c4 ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Namespace="calico-system" Pod="calico-kube-controllers-85dfcd4bbd-qbgm9" 
WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.305557 containerd[1624]: 2025-11-01 00:23:19.291 [INFO][4698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Namespace="calico-system" Pod="calico-kube-controllers-85dfcd4bbd-qbgm9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.305557 containerd[1624]: 2025-11-01 00:23:19.291 [INFO][4698] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Namespace="calico-system" Pod="calico-kube-controllers-85dfcd4bbd-qbgm9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0", GenerateName:"calico-kube-controllers-85dfcd4bbd-", Namespace:"calico-system", SelfLink:"", UID:"e33febfb-cf29-450e-a371-4a2c6d265345", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85dfcd4bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", 
ContainerID:"bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5", Pod:"calico-kube-controllers-85dfcd4bbd-qbgm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91f9ac355c4", MAC:"1a:0f:71:2b:7b:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:19.305557 containerd[1624]: 2025-11-01 00:23:19.301 [INFO][4698] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5" Namespace="calico-system" Pod="calico-kube-controllers-85dfcd4bbd-qbgm9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:19.318944 containerd[1624]: time="2025-11-01T00:23:19.318779784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:19.320205 containerd[1624]: time="2025-11-01T00:23:19.319330034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:19.320205 containerd[1624]: time="2025-11-01T00:23:19.320189745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:19.320311 containerd[1624]: time="2025-11-01T00:23:19.320272162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:19.379044 systemd-networkd[1251]: cali20d85c45234: Link UP Nov 1 00:23:19.380391 systemd-networkd[1251]: cali20d85c45234: Gained carrier Nov 1 00:23:19.387472 containerd[1624]: time="2025-11-01T00:23:19.387420715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85dfcd4bbd-qbgm9,Uid:e33febfb-cf29-450e-a371-4a2c6d265345,Namespace:calico-system,Attempt:1,} returns sandbox id \"bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5\"" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.205 [INFO][4689] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.226 [INFO][4689] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0 goldmane-666569f655- calico-system 5aad50b7-9c5b-4c75-b82d-9cd68d392290 924 0 2025-11-01 00:22:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-b21903d23a goldmane-666569f655-lrfg9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali20d85c45234 [] [] }} ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Namespace="calico-system" Pod="goldmane-666569f655-lrfg9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.227 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Namespace="calico-system" Pod="goldmane-666569f655-lrfg9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 
00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.265 [INFO][4720] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" HandleID="k8s-pod-network.7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.266 [INFO][4720] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" HandleID="k8s-pod-network.7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-b21903d23a", "pod":"goldmane-666569f655-lrfg9", "timestamp":"2025-11-01 00:23:19.265797219 +0000 UTC"}, Hostname:"ci-4081-3-6-n-b21903d23a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.266 [INFO][4720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.277 [INFO][4720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.277 [INFO][4720] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-b21903d23a' Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.344 [INFO][4720] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.350 [INFO][4720] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.355 [INFO][4720] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.357 [INFO][4720] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.359 [INFO][4720] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.359 [INFO][4720] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.360 [INFO][4720] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.364 [INFO][4720] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.371 [INFO][4720] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.198/26] block=192.168.113.192/26 handle="k8s-pod-network.7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.371 [INFO][4720] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.198/26] handle="k8s-pod-network.7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.371 [INFO][4720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:19.396256 containerd[1624]: 2025-11-01 00:23:19.371 [INFO][4720] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.198/26] IPv6=[] ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" HandleID="k8s-pod-network.7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:19.396711 containerd[1624]: 2025-11-01 00:23:19.376 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Namespace="calico-system" Pod="goldmane-666569f655-lrfg9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5aad50b7-9c5b-4c75-b82d-9cd68d392290", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"", Pod:"goldmane-666569f655-lrfg9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20d85c45234", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:19.396711 containerd[1624]: 2025-11-01 00:23:19.376 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.198/32] ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Namespace="calico-system" Pod="goldmane-666569f655-lrfg9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:19.396711 containerd[1624]: 2025-11-01 00:23:19.376 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20d85c45234 ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Namespace="calico-system" Pod="goldmane-666569f655-lrfg9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:19.396711 containerd[1624]: 2025-11-01 00:23:19.381 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Namespace="calico-system" Pod="goldmane-666569f655-lrfg9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:19.396711 containerd[1624]: 2025-11-01 00:23:19.381 [INFO][4689] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Namespace="calico-system" Pod="goldmane-666569f655-lrfg9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5aad50b7-9c5b-4c75-b82d-9cd68d392290", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef", Pod:"goldmane-666569f655-lrfg9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20d85c45234", MAC:"4e:05:2d:cd:f2:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:19.396711 containerd[1624]: 2025-11-01 00:23:19.394 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef" Namespace="calico-system" Pod="goldmane-666569f655-lrfg9" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:19.415254 containerd[1624]: time="2025-11-01T00:23:19.415068413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:19.415386 containerd[1624]: time="2025-11-01T00:23:19.415206837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:19.416224 containerd[1624]: time="2025-11-01T00:23:19.415852249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:19.416224 containerd[1624]: time="2025-11-01T00:23:19.415973941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:19.440799 kubelet[2745]: E1101 00:23:19.440610 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:23:19.469925 kubelet[2745]: I1101 00:23:19.469827 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hbtgd" podStartSLOduration=38.469803893 podStartE2EDuration="38.469803893s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:19.445826707 +0000 UTC m=+44.584519191" watchObservedRunningTime="2025-11-01 00:23:19.469803893 +0000 UTC m=+44.608496377" Nov 1 00:23:19.532549 containerd[1624]: time="2025-11-01T00:23:19.532507615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lrfg9,Uid:5aad50b7-9c5b-4c75-b82d-9cd68d392290,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef\"" Nov 1 00:23:19.655038 containerd[1624]: time="2025-11-01T00:23:19.653596952Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:19.656174 containerd[1624]: time="2025-11-01T00:23:19.655864768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:19.657623 kubelet[2745]: E1101 00:23:19.657556 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:19.657742 kubelet[2745]: E1101 00:23:19.657628 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:19.658007 kubelet[2745]: E1101 00:23:19.657915 2745 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hblht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:19.659922 containerd[1624]: time="2025-11-01T00:23:19.657189745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:19.659922 containerd[1624]: time="2025-11-01T00:23:19.658495567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:19.791213 systemd-networkd[1251]: calicddb88a7c00: Gained IPv6LL Nov 1 00:23:20.092556 containerd[1624]: time="2025-11-01T00:23:20.092498104Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:20.095147 containerd[1624]: time="2025-11-01T00:23:20.093823852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:20.095147 containerd[1624]: time="2025-11-01T00:23:20.094276336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:20.096189 kubelet[2745]: E1101 00:23:20.096143 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:20.096341 kubelet[2745]: E1101 00:23:20.096319 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:20.097052 kubelet[2745]: E1101 00:23:20.096964 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwr9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85dfcd4bbd-qbgm9_calico-system(e33febfb-cf29-450e-a371-4a2c6d265345): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:20.099174 kubelet[2745]: E1101 00:23:20.099145 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:23:20.099667 containerd[1624]: time="2025-11-01T00:23:20.099619347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:20.302448 systemd-networkd[1251]: cali3cfa6e116e6: Gained IPv6LL Nov 1 00:23:20.367275 systemd-networkd[1251]: cali8d9beeed6b4: Gained IPv6LL Nov 1 00:23:20.444819 kubelet[2745]: E1101 00:23:20.444775 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:23:20.445517 kubelet[2745]: E1101 00:23:20.445037 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:23:20.558334 systemd-networkd[1251]: cali91f9ac355c4: Gained IPv6LL Nov 1 00:23:20.563446 containerd[1624]: time="2025-11-01T00:23:20.563392914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:20.564594 
containerd[1624]: time="2025-11-01T00:23:20.564485638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:20.564594 containerd[1624]: time="2025-11-01T00:23:20.564542537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:20.564752 kubelet[2745]: E1101 00:23:20.564688 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:20.564752 kubelet[2745]: E1101 00:23:20.564737 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:20.565296 kubelet[2745]: E1101 00:23:20.564957 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lrfg9_calico-system(5aad50b7-9c5b-4c75-b82d-9cd68d392290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:20.566185 kubelet[2745]: E1101 00:23:20.566097 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:23:20.567574 containerd[1624]: time="2025-11-01T00:23:20.567543772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:20.990533 containerd[1624]: time="2025-11-01T00:23:20.990488997Z" level=info msg="trying next host - 
response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:20.992231 containerd[1624]: time="2025-11-01T00:23:20.991816450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:20.992231 containerd[1624]: time="2025-11-01T00:23:20.991891543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:20.992355 kubelet[2745]: E1101 00:23:20.992070 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:20.992355 kubelet[2745]: E1101 00:23:20.992140 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:20.992506 kubelet[2745]: E1101 00:23:20.992453 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hblht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:20.993580 kubelet[2745]: E1101 00:23:20.993538 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:23:21.326689 systemd-networkd[1251]: cali20d85c45234: Gained IPv6LL Nov 1 00:23:21.465567 kubelet[2745]: E1101 00:23:21.465516 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:23:21.466771 kubelet[2745]: E1101 00:23:21.466609 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:23:21.995370 containerd[1624]: time="2025-11-01T00:23:21.994645180Z" level=info msg="StopPodSandbox for \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\"" Nov 1 00:23:21.995370 containerd[1624]: time="2025-11-01T00:23:21.994684557Z" level=info msg="StopPodSandbox for \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\"" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.057 [INFO][4907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.057 [INFO][4907] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" iface="eth0" netns="/var/run/netns/cni-ebaa8a21-73d0-fd67-e0f0-e50e0ccff2e4" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.058 [INFO][4907] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" iface="eth0" netns="/var/run/netns/cni-ebaa8a21-73d0-fd67-e0f0-e50e0ccff2e4" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.058 [INFO][4907] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" iface="eth0" netns="/var/run/netns/cni-ebaa8a21-73d0-fd67-e0f0-e50e0ccff2e4" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.058 [INFO][4907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.058 [INFO][4907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.084 [INFO][4921] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" HandleID="k8s-pod-network.61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.084 [INFO][4921] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.084 [INFO][4921] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.089 [WARNING][4921] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" HandleID="k8s-pod-network.61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.089 [INFO][4921] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" HandleID="k8s-pod-network.61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.094 [INFO][4921] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.101073 containerd[1624]: 2025-11-01 00:23:22.097 [INFO][4907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:22.102360 containerd[1624]: time="2025-11-01T00:23:22.102244025Z" level=info msg="TearDown network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\" successfully" Nov 1 00:23:22.102360 containerd[1624]: time="2025-11-01T00:23:22.102271978Z" level=info msg="StopPodSandbox for \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\" returns successfully" Nov 1 00:23:22.106680 systemd[1]: run-netns-cni\x2debaa8a21\x2d73d0\x2dfd67\x2de0f0\x2de50e0ccff2e4.mount: Deactivated successfully. 
Nov 1 00:23:22.107724 containerd[1624]: time="2025-11-01T00:23:22.106688460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n2bnk,Uid:3349d8c7-91f7-48f7-a15a-d52d578f2952,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.064 [INFO][4906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.064 [INFO][4906] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" iface="eth0" netns="/var/run/netns/cni-814c791c-1332-642c-3084-8a03dcd206e9" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.064 [INFO][4906] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" iface="eth0" netns="/var/run/netns/cni-814c791c-1332-642c-3084-8a03dcd206e9" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.065 [INFO][4906] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" iface="eth0" netns="/var/run/netns/cni-814c791c-1332-642c-3084-8a03dcd206e9" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.065 [INFO][4906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.065 [INFO][4906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.091 [INFO][4926] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" HandleID="k8s-pod-network.a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.091 [INFO][4926] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.094 [INFO][4926] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.101 [WARNING][4926] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" HandleID="k8s-pod-network.a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.101 [INFO][4926] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" HandleID="k8s-pod-network.a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.104 [INFO][4926] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.111511 containerd[1624]: 2025-11-01 00:23:22.108 [INFO][4906] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:22.112729 containerd[1624]: time="2025-11-01T00:23:22.111636786Z" level=info msg="TearDown network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\" successfully" Nov 1 00:23:22.112729 containerd[1624]: time="2025-11-01T00:23:22.111655812Z" level=info msg="StopPodSandbox for \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\" returns successfully" Nov 1 00:23:22.112729 containerd[1624]: time="2025-11-01T00:23:22.112408517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd88c66c7-sqhhf,Uid:86457ed6-a969-4f17-a69a-681dcab352cc,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:22.116835 systemd[1]: run-netns-cni\x2d814c791c\x2d1332\x2d642c\x2d3084\x2d8a03dcd206e9.mount: Deactivated successfully. 
Nov 1 00:23:22.264290 systemd-networkd[1251]: calidc88fffd1ca: Link UP Nov 1 00:23:22.266447 systemd-networkd[1251]: calidc88fffd1ca: Gained carrier Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.162 [INFO][4934] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.173 [INFO][4934] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0 coredns-668d6bf9bc- kube-system 3349d8c7-91f7-48f7-a15a-d52d578f2952 985 0 2025-11-01 00:22:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-b21903d23a coredns-668d6bf9bc-n2bnk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidc88fffd1ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Namespace="kube-system" Pod="coredns-668d6bf9bc-n2bnk" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.174 [INFO][4934] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Namespace="kube-system" Pod="coredns-668d6bf9bc-n2bnk" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.212 [INFO][4961] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" HandleID="k8s-pod-network.d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 
00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.212 [INFO][4961] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" HandleID="k8s-pod-network.d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-b21903d23a", "pod":"coredns-668d6bf9bc-n2bnk", "timestamp":"2025-11-01 00:23:22.212249116 +0000 UTC"}, Hostname:"ci-4081-3-6-n-b21903d23a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.212 [INFO][4961] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.212 [INFO][4961] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.212 [INFO][4961] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-b21903d23a' Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.218 [INFO][4961] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.228 [INFO][4961] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.234 [INFO][4961] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.236 [INFO][4961] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.238 [INFO][4961] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.239 [INFO][4961] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.240 [INFO][4961] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28 Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.245 [INFO][4961] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.252 [INFO][4961] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.199/26] block=192.168.113.192/26 handle="k8s-pod-network.d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.253 [INFO][4961] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.199/26] handle="k8s-pod-network.d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.253 [INFO][4961] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.281940 containerd[1624]: 2025-11-01 00:23:22.253 [INFO][4961] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.199/26] IPv6=[] ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" HandleID="k8s-pod-network.d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:22.282657 containerd[1624]: 2025-11-01 00:23:22.257 [INFO][4934] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Namespace="kube-system" Pod="coredns-668d6bf9bc-n2bnk" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3349d8c7-91f7-48f7-a15a-d52d578f2952", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"", Pod:"coredns-668d6bf9bc-n2bnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc88fffd1ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.282657 containerd[1624]: 2025-11-01 00:23:22.257 [INFO][4934] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.199/32] ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Namespace="kube-system" Pod="coredns-668d6bf9bc-n2bnk" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:22.282657 containerd[1624]: 2025-11-01 00:23:22.257 [INFO][4934] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc88fffd1ca ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Namespace="kube-system" Pod="coredns-668d6bf9bc-n2bnk" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:22.282657 containerd[1624]: 2025-11-01 00:23:22.264 [INFO][4934] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Namespace="kube-system" Pod="coredns-668d6bf9bc-n2bnk" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:22.282657 containerd[1624]: 2025-11-01 00:23:22.265 [INFO][4934] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Namespace="kube-system" Pod="coredns-668d6bf9bc-n2bnk" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3349d8c7-91f7-48f7-a15a-d52d578f2952", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28", Pod:"coredns-668d6bf9bc-n2bnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc88fffd1ca", 
MAC:"f2:fa:03:ca:c5:de", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.282657 containerd[1624]: 2025-11-01 00:23:22.277 [INFO][4934] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28" Namespace="kube-system" Pod="coredns-668d6bf9bc-n2bnk" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:22.300068 containerd[1624]: time="2025-11-01T00:23:22.299913487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:22.300454 containerd[1624]: time="2025-11-01T00:23:22.300325593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:22.300454 containerd[1624]: time="2025-11-01T00:23:22.300401468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:22.300759 containerd[1624]: time="2025-11-01T00:23:22.300637699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:22.369173 containerd[1624]: time="2025-11-01T00:23:22.369101497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n2bnk,Uid:3349d8c7-91f7-48f7-a15a-d52d578f2952,Namespace:kube-system,Attempt:1,} returns sandbox id \"d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28\"" Nov 1 00:23:22.372497 systemd-networkd[1251]: cali673a621fb1d: Link UP Nov 1 00:23:22.372669 systemd-networkd[1251]: cali673a621fb1d: Gained carrier Nov 1 00:23:22.376699 containerd[1624]: time="2025-11-01T00:23:22.376663226Z" level=info msg="CreateContainer within sandbox \"d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.172 [INFO][4942] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.186 [INFO][4942] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0 calico-apiserver-5cd88c66c7- calico-apiserver 86457ed6-a969-4f17-a69a-681dcab352cc 986 0 2025-11-01 00:22:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cd88c66c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-b21903d23a calico-apiserver-5cd88c66c7-sqhhf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali673a621fb1d [] [] }} ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-sqhhf" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-" Nov 1 
00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.187 [INFO][4942] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-sqhhf" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.218 [INFO][4966] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" HandleID="k8s-pod-network.c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.218 [INFO][4966] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" HandleID="k8s-pod-network.c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-b21903d23a", "pod":"calico-apiserver-5cd88c66c7-sqhhf", "timestamp":"2025-11-01 00:23:22.218401257 +0000 UTC"}, Hostname:"ci-4081-3-6-n-b21903d23a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.218 [INFO][4966] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.253 [INFO][4966] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.253 [INFO][4966] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-b21903d23a' Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.324 [INFO][4966] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.330 [INFO][4966] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.335 [INFO][4966] ipam/ipam.go 511: Trying affinity for 192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.337 [INFO][4966] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.339 [INFO][4966] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.192/26 host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.339 [INFO][4966] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.192/26 handle="k8s-pod-network.c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.340 [INFO][4966] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693 Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.344 [INFO][4966] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.192/26 handle="k8s-pod-network.c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.354 [INFO][4966] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.113.200/26] block=192.168.113.192/26 handle="k8s-pod-network.c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.355 [INFO][4966] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.200/26] handle="k8s-pod-network.c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" host="ci-4081-3-6-n-b21903d23a" Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.355 [INFO][4966] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.394627 containerd[1624]: 2025-11-01 00:23:22.355 [INFO][4966] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.200/26] IPv6=[] ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" HandleID="k8s-pod-network.c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.395171 containerd[1624]: 2025-11-01 00:23:22.360 [INFO][4942] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-sqhhf" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0", GenerateName:"calico-apiserver-5cd88c66c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"86457ed6-a969-4f17-a69a-681dcab352cc", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd88c66c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"", Pod:"calico-apiserver-5cd88c66c7-sqhhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali673a621fb1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.395171 containerd[1624]: 2025-11-01 00:23:22.360 [INFO][4942] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.200/32] ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-sqhhf" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.395171 containerd[1624]: 2025-11-01 00:23:22.360 [INFO][4942] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali673a621fb1d ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-sqhhf" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.395171 containerd[1624]: 2025-11-01 00:23:22.376 [INFO][4942] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Namespace="calico-apiserver" 
Pod="calico-apiserver-5cd88c66c7-sqhhf" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.395171 containerd[1624]: 2025-11-01 00:23:22.378 [INFO][4942] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-sqhhf" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0", GenerateName:"calico-apiserver-5cd88c66c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"86457ed6-a969-4f17-a69a-681dcab352cc", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd88c66c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693", Pod:"calico-apiserver-5cd88c66c7-sqhhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali673a621fb1d", MAC:"f6:86:26:ce:05:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.395171 containerd[1624]: 2025-11-01 00:23:22.392 [INFO][4942] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693" Namespace="calico-apiserver" Pod="calico-apiserver-5cd88c66c7-sqhhf" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:22.397782 containerd[1624]: time="2025-11-01T00:23:22.397718034Z" level=info msg="CreateContainer within sandbox \"d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3070d269ea5edde7a2ec420496df337f9d31bb47056c9ab76a233d4421bbfd40\"" Nov 1 00:23:22.399349 containerd[1624]: time="2025-11-01T00:23:22.398971844Z" level=info msg="StartContainer for \"3070d269ea5edde7a2ec420496df337f9d31bb47056c9ab76a233d4421bbfd40\"" Nov 1 00:23:22.442394 containerd[1624]: time="2025-11-01T00:23:22.442226906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:22.442394 containerd[1624]: time="2025-11-01T00:23:22.442279084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:22.442394 containerd[1624]: time="2025-11-01T00:23:22.442292059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:22.443697 containerd[1624]: time="2025-11-01T00:23:22.443582811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:22.488825 containerd[1624]: time="2025-11-01T00:23:22.488783490Z" level=info msg="StartContainer for \"3070d269ea5edde7a2ec420496df337f9d31bb47056c9ab76a233d4421bbfd40\" returns successfully" Nov 1 00:23:22.507062 containerd[1624]: time="2025-11-01T00:23:22.507019945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cd88c66c7-sqhhf,Uid:86457ed6-a969-4f17-a69a-681dcab352cc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693\"" Nov 1 00:23:22.508655 containerd[1624]: time="2025-11-01T00:23:22.508616138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:22.941695 containerd[1624]: time="2025-11-01T00:23:22.941606907Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:22.943691 containerd[1624]: time="2025-11-01T00:23:22.943574850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:22.943913 containerd[1624]: time="2025-11-01T00:23:22.943623563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:22.943978 kubelet[2745]: E1101 00:23:22.943905 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:22.944499 kubelet[2745]: E1101 00:23:22.943995 2745 kuberuntime_image.go:55] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:22.944499 kubelet[2745]: E1101 00:23:22.944229 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2bh9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd88c66c7-sqhhf_calico-apiserver(86457ed6-a969-4f17-a69a-681dcab352cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:22.945567 kubelet[2745]: E1101 00:23:22.945473 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:23:23.475467 kubelet[2745]: E1101 00:23:23.475307 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:23:23.513980 kubelet[2745]: I1101 00:23:23.512724 2745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n2bnk" podStartSLOduration=42.512704865 podStartE2EDuration="42.512704865s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:23.510550447 +0000 UTC m=+48.649242953" watchObservedRunningTime="2025-11-01 00:23:23.512704865 +0000 UTC m=+48.651397360" Nov 1 00:23:23.823381 systemd-networkd[1251]: cali673a621fb1d: Gained IPv6LL Nov 1 00:23:24.142351 systemd-networkd[1251]: calidc88fffd1ca: Gained IPv6LL Nov 1 00:23:24.486620 kubelet[2745]: E1101 00:23:24.486120 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:23:26.563777 kubelet[2745]: I1101 00:23:26.563403 2745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:27.738430 kernel: bpftool[5243]: memfd_create() called without 
MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:23:27.975466 systemd-networkd[1251]: vxlan.calico: Link UP Nov 1 00:23:27.975475 systemd-networkd[1251]: vxlan.calico: Gained carrier Nov 1 00:23:28.995013 containerd[1624]: time="2025-11-01T00:23:28.993953496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:29.426869 containerd[1624]: time="2025-11-01T00:23:29.426720370Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:29.428344 containerd[1624]: time="2025-11-01T00:23:29.428268245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:29.428344 containerd[1624]: time="2025-11-01T00:23:29.428301275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:29.429013 kubelet[2745]: E1101 00:23:29.428611 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:29.429013 kubelet[2745]: E1101 00:23:29.428669 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:29.429013 kubelet[2745]: E1101 00:23:29.428800 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b9a93a7233c9461ab5447c8e9d685214,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg6sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5bd87784b4-tjjnp_calico-system(ceed905b-f8f5-47a1-9eef-2e450e657cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:29.432483 containerd[1624]: time="2025-11-01T00:23:29.432430787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
00:23:29.869132 containerd[1624]: time="2025-11-01T00:23:29.869069681Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:29.870182 containerd[1624]: time="2025-11-01T00:23:29.870141420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:29.870290 containerd[1624]: time="2025-11-01T00:23:29.870210785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:29.870465 kubelet[2745]: E1101 00:23:29.870419 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:29.870533 kubelet[2745]: E1101 00:23:29.870480 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:29.870887 kubelet[2745]: E1101 00:23:29.870599 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg6sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5bd87784b4-tjjnp_calico-system(ceed905b-f8f5-47a1-9eef-2e450e657cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:29.872187 kubelet[2745]: E1101 00:23:29.872118 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:23:29.966465 systemd-networkd[1251]: vxlan.calico: Gained IPv6LL Nov 1 00:23:31.993544 containerd[1624]: time="2025-11-01T00:23:31.992971117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:32.416748 containerd[1624]: time="2025-11-01T00:23:32.416567284Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:32.418020 containerd[1624]: time="2025-11-01T00:23:32.417960613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:32.418080 containerd[1624]: time="2025-11-01T00:23:32.418047450Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:32.418303 kubelet[2745]: E1101 00:23:32.418207 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:32.418303 kubelet[2745]: E1101 00:23:32.418272 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:32.418774 kubelet[2745]: E1101 00:23:32.418403 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfdz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd88c66c7-t86s4_calico-apiserver(7a5e4241-2b02-4d05-aee8-621954146083): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:32.419747 kubelet[2745]: E1101 00:23:32.419655 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:23:33.994065 containerd[1624]: time="2025-11-01T00:23:33.993697402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:34.663300 containerd[1624]: time="2025-11-01T00:23:34.663230115Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:34.664748 containerd[1624]: time="2025-11-01T00:23:34.664640543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:34.664748 containerd[1624]: time="2025-11-01T00:23:34.664673342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:34.664940 kubelet[2745]: E1101 00:23:34.664873 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:34.665403 kubelet[2745]: E1101 00:23:34.664952 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:34.665403 kubelet[2745]: E1101 00:23:34.665194 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lrfg9_calico-system(5aad50b7-9c5b-4c75-b82d-9cd68d392290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:34.666706 kubelet[2745]: E1101 00:23:34.666440 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:23:35.002512 containerd[1624]: time="2025-11-01T00:23:35.002395224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:35.015513 containerd[1624]: time="2025-11-01T00:23:35.013213847Z" level=info msg="StopPodSandbox for \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\"" Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.069 [WARNING][5386] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.069 [INFO][5386] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.069 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" iface="eth0" netns="" Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.069 [INFO][5386] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.069 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.088 [INFO][5394] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" HandleID="k8s-pod-network.69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.089 [INFO][5394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.089 [INFO][5394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.097 [WARNING][5394] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" HandleID="k8s-pod-network.69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.097 [INFO][5394] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" HandleID="k8s-pod-network.69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.099 [INFO][5394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.103036 containerd[1624]: 2025-11-01 00:23:35.100 [INFO][5386] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:35.103036 containerd[1624]: time="2025-11-01T00:23:35.103061676Z" level=info msg="TearDown network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\" successfully" Nov 1 00:23:35.104912 containerd[1624]: time="2025-11-01T00:23:35.103084588Z" level=info msg="StopPodSandbox for \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\" returns successfully" Nov 1 00:23:35.104912 containerd[1624]: time="2025-11-01T00:23:35.104575715Z" level=info msg="RemovePodSandbox for \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\"" Nov 1 00:23:35.104912 containerd[1624]: time="2025-11-01T00:23:35.104603345Z" level=info msg="Forcibly stopping sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\"" Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.134 [WARNING][5408] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" WorkloadEndpoint="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.134 [INFO][5408] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.134 [INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" iface="eth0" netns="" Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.134 [INFO][5408] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.134 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.155 [INFO][5415] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" HandleID="k8s-pod-network.69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.155 [INFO][5415] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.155 [INFO][5415] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.160 [WARNING][5415] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" HandleID="k8s-pod-network.69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.160 [INFO][5415] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" HandleID="k8s-pod-network.69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Workload="ci--4081--3--6--n--b21903d23a-k8s-whisker--dd6f966--7pmfv-eth0" Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.161 [INFO][5415] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.166299 containerd[1624]: 2025-11-01 00:23:35.163 [INFO][5408] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24" Nov 1 00:23:35.166299 containerd[1624]: time="2025-11-01T00:23:35.165018306Z" level=info msg="TearDown network for sandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\" successfully" Nov 1 00:23:35.175997 containerd[1624]: time="2025-11-01T00:23:35.175949052Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:35.176174 containerd[1624]: time="2025-11-01T00:23:35.176016134Z" level=info msg="RemovePodSandbox \"69a5891ad677161c651c0e6b41ba424db960bc2d6746e82a4d2068cfbbf75b24\" returns successfully" Nov 1 00:23:35.176566 containerd[1624]: time="2025-11-01T00:23:35.176546667Z" level=info msg="StopPodSandbox for \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\"" Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.206 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5aad50b7-9c5b-4c75-b82d-9cd68d392290", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef", Pod:"goldmane-666569f655-lrfg9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali20d85c45234", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.206 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.206 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" iface="eth0" netns="" Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.207 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.207 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.228 [INFO][5437] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" HandleID="k8s-pod-network.60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.228 [INFO][5437] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.228 [INFO][5437] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.234 [WARNING][5437] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" HandleID="k8s-pod-network.60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.234 [INFO][5437] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" HandleID="k8s-pod-network.60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.236 [INFO][5437] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.239972 containerd[1624]: 2025-11-01 00:23:35.238 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:35.241183 containerd[1624]: time="2025-11-01T00:23:35.240024418Z" level=info msg="TearDown network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\" successfully" Nov 1 00:23:35.241183 containerd[1624]: time="2025-11-01T00:23:35.240048702Z" level=info msg="StopPodSandbox for \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\" returns successfully" Nov 1 00:23:35.241183 containerd[1624]: time="2025-11-01T00:23:35.240748913Z" level=info msg="RemovePodSandbox for \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\"" Nov 1 00:23:35.241183 containerd[1624]: time="2025-11-01T00:23:35.240774570Z" level=info msg="Forcibly stopping sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\"" Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.283 [WARNING][5451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5aad50b7-9c5b-4c75-b82d-9cd68d392290", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"7e40448e9fd8a50dc54c7c5bc7076ebd06fbe1a6b45d55c75252309e90a917ef", Pod:"goldmane-666569f655-lrfg9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali20d85c45234", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.284 [INFO][5451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.284 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" iface="eth0" netns="" Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.284 [INFO][5451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.284 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.303 [INFO][5458] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" HandleID="k8s-pod-network.60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.303 [INFO][5458] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.303 [INFO][5458] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.309 [WARNING][5458] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" HandleID="k8s-pod-network.60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.309 [INFO][5458] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" HandleID="k8s-pod-network.60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Workload="ci--4081--3--6--n--b21903d23a-k8s-goldmane--666569f655--lrfg9-eth0" Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.311 [INFO][5458] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.315945 containerd[1624]: 2025-11-01 00:23:35.312 [INFO][5451] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb" Nov 1 00:23:35.315945 containerd[1624]: time="2025-11-01T00:23:35.314852747Z" level=info msg="TearDown network for sandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\" successfully" Nov 1 00:23:35.318946 containerd[1624]: time="2025-11-01T00:23:35.318753950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:35.318946 containerd[1624]: time="2025-11-01T00:23:35.318832472Z" level=info msg="RemovePodSandbox \"60c4ab7496bf23789205b409c19706adcdbea7c527757e7828b28dc8721c5bfb\" returns successfully" Nov 1 00:23:35.319365 containerd[1624]: time="2025-11-01T00:23:35.319338141Z" level=info msg="StopPodSandbox for \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\"" Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.351 [WARNING][5472] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3349d8c7-91f7-48f7-a15a-d52d578f2952", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28", Pod:"coredns-668d6bf9bc-n2bnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc88fffd1ca", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.352 [INFO][5472] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.352 [INFO][5472] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" iface="eth0" netns="" Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.352 [INFO][5472] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.352 [INFO][5472] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.376 [INFO][5480] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" HandleID="k8s-pod-network.61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.376 [INFO][5480] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.376 [INFO][5480] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.382 [WARNING][5480] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" HandleID="k8s-pod-network.61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.383 [INFO][5480] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" HandleID="k8s-pod-network.61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.384 [INFO][5480] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.390323 containerd[1624]: 2025-11-01 00:23:35.387 [INFO][5472] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:35.392015 containerd[1624]: time="2025-11-01T00:23:35.390393890Z" level=info msg="TearDown network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\" successfully" Nov 1 00:23:35.392015 containerd[1624]: time="2025-11-01T00:23:35.390444393Z" level=info msg="StopPodSandbox for \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\" returns successfully" Nov 1 00:23:35.392015 containerd[1624]: time="2025-11-01T00:23:35.391393175Z" level=info msg="RemovePodSandbox for \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\"" Nov 1 00:23:35.392015 containerd[1624]: time="2025-11-01T00:23:35.391448956Z" level=info msg="Forcibly stopping sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\"" Nov 1 00:23:35.431934 containerd[1624]: time="2025-11-01T00:23:35.431719302Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:35.433122 containerd[1624]: time="2025-11-01T00:23:35.433054676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:35.433373 containerd[1624]: time="2025-11-01T00:23:35.433247697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:35.433651 kubelet[2745]: E1101 00:23:35.433610 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:35.433935 
kubelet[2745]: E1101 00:23:35.433798 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:35.434073 kubelet[2745]: E1101 00:23:35.434011 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hblht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,Se
ccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:35.437459 containerd[1624]: time="2025-11-01T00:23:35.437423970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.429 [WARNING][5494] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3349d8c7-91f7-48f7-a15a-d52d578f2952", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"d3315d851b4e817e531f3e78d42823c0b04130deee2dafe24e9dcb9a98866f28", Pod:"coredns-668d6bf9bc-n2bnk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc88fffd1ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.430 [INFO][5494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.430 [INFO][5494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" iface="eth0" netns="" Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.430 [INFO][5494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.430 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.457 [INFO][5501] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" HandleID="k8s-pod-network.61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.457 [INFO][5501] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.457 [INFO][5501] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.465 [WARNING][5501] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" HandleID="k8s-pod-network.61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.465 [INFO][5501] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" HandleID="k8s-pod-network.61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--n2bnk-eth0" Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.467 [INFO][5501] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.470791 containerd[1624]: 2025-11-01 00:23:35.469 [INFO][5494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030" Nov 1 00:23:35.471433 containerd[1624]: time="2025-11-01T00:23:35.470849978Z" level=info msg="TearDown network for sandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\" successfully" Nov 1 00:23:35.473804 containerd[1624]: time="2025-11-01T00:23:35.473772044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:35.473902 containerd[1624]: time="2025-11-01T00:23:35.473819990Z" level=info msg="RemovePodSandbox \"61e09e1d46b239e33705fc0635410395fa72a1b54c99047c416399a919347030\" returns successfully" Nov 1 00:23:35.474391 containerd[1624]: time="2025-11-01T00:23:35.474368336Z" level=info msg="StopPodSandbox for \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\"" Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.505 [WARNING][5516] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0", GenerateName:"calico-apiserver-5cd88c66c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"86457ed6-a969-4f17-a69a-681dcab352cc", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd88c66c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693", Pod:"calico-apiserver-5cd88c66c7-sqhhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali673a621fb1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.505 [INFO][5516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.505 [INFO][5516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" iface="eth0" netns="" Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.505 [INFO][5516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.505 [INFO][5516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.527 [INFO][5524] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" HandleID="k8s-pod-network.a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.527 [INFO][5524] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.527 [INFO][5524] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.533 [WARNING][5524] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" HandleID="k8s-pod-network.a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.533 [INFO][5524] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" HandleID="k8s-pod-network.a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.534 [INFO][5524] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.538084 containerd[1624]: 2025-11-01 00:23:35.536 [INFO][5516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:35.538084 containerd[1624]: time="2025-11-01T00:23:35.537936050Z" level=info msg="TearDown network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\" successfully" Nov 1 00:23:35.538084 containerd[1624]: time="2025-11-01T00:23:35.537963901Z" level=info msg="StopPodSandbox for \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\" returns successfully" Nov 1 00:23:35.539202 containerd[1624]: time="2025-11-01T00:23:35.539172886Z" level=info msg="RemovePodSandbox for \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\"" Nov 1 00:23:35.539202 containerd[1624]: time="2025-11-01T00:23:35.539202890Z" level=info msg="Forcibly stopping sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\"" Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.573 [WARNING][5538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0", GenerateName:"calico-apiserver-5cd88c66c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"86457ed6-a969-4f17-a69a-681dcab352cc", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd88c66c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"c69c83c4fd1bf99afdae9a1bfed239449cd0d42c079ebd59809935dae6b9f693", Pod:"calico-apiserver-5cd88c66c7-sqhhf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali673a621fb1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.573 [INFO][5538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.573 [INFO][5538] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" iface="eth0" netns="" Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.573 [INFO][5538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.573 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.596 [INFO][5545] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" HandleID="k8s-pod-network.a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.596 [INFO][5545] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.596 [INFO][5545] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.603 [WARNING][5545] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" HandleID="k8s-pod-network.a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.603 [INFO][5545] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" HandleID="k8s-pod-network.a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--sqhhf-eth0" Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.606 [INFO][5545] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.610515 containerd[1624]: 2025-11-01 00:23:35.608 [INFO][5538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7" Nov 1 00:23:35.610515 containerd[1624]: time="2025-11-01T00:23:35.610455538Z" level=info msg="TearDown network for sandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\" successfully" Nov 1 00:23:35.615258 containerd[1624]: time="2025-11-01T00:23:35.615210991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:35.615317 containerd[1624]: time="2025-11-01T00:23:35.615288673Z" level=info msg="RemovePodSandbox \"a2992c2c5fc3bad38fa3c3f36e317636b8d6569ce316411632153682f736dbf7\" returns successfully" Nov 1 00:23:35.615880 containerd[1624]: time="2025-11-01T00:23:35.615828793Z" level=info msg="StopPodSandbox for \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\"" Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.653 [WARNING][5559] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fb3e683-810b-4091-a4c8-6fa869de6607", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d", Pod:"csi-node-driver-jnx62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3cfa6e116e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.654 [INFO][5559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.654 [INFO][5559] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" iface="eth0" netns="" Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.654 [INFO][5559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.654 [INFO][5559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.672 [INFO][5566] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" HandleID="k8s-pod-network.8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.672 [INFO][5566] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.672 [INFO][5566] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.677 [WARNING][5566] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" HandleID="k8s-pod-network.8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.677 [INFO][5566] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" HandleID="k8s-pod-network.8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.679 [INFO][5566] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.682544 containerd[1624]: 2025-11-01 00:23:35.680 [INFO][5559] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:35.682544 containerd[1624]: time="2025-11-01T00:23:35.682388300Z" level=info msg="TearDown network for sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\" successfully" Nov 1 00:23:35.682544 containerd[1624]: time="2025-11-01T00:23:35.682423584Z" level=info msg="StopPodSandbox for \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\" returns successfully" Nov 1 00:23:35.683261 containerd[1624]: time="2025-11-01T00:23:35.683158999Z" level=info msg="RemovePodSandbox for \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\"" Nov 1 00:23:35.683261 containerd[1624]: time="2025-11-01T00:23:35.683185618Z" level=info msg="Forcibly stopping sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\"" Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.714 [WARNING][5580] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fb3e683-810b-4091-a4c8-6fa869de6607", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"0249643cb9defa62ecc6b28b29ff0064f1b70644a9ae03afbdf45723c056352d", Pod:"csi-node-driver-jnx62", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3cfa6e116e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.714 [INFO][5580] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.714 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" iface="eth0" netns="" Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.714 [INFO][5580] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.714 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.730 [INFO][5587] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" HandleID="k8s-pod-network.8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.730 [INFO][5587] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.730 [INFO][5587] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.735 [WARNING][5587] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" HandleID="k8s-pod-network.8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.736 [INFO][5587] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" HandleID="k8s-pod-network.8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Workload="ci--4081--3--6--n--b21903d23a-k8s-csi--node--driver--jnx62-eth0" Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.737 [INFO][5587] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.741100 containerd[1624]: 2025-11-01 00:23:35.739 [INFO][5580] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497" Nov 1 00:23:35.741521 containerd[1624]: time="2025-11-01T00:23:35.741155399Z" level=info msg="TearDown network for sandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\" successfully" Nov 1 00:23:35.744527 containerd[1624]: time="2025-11-01T00:23:35.744496865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:35.744621 containerd[1624]: time="2025-11-01T00:23:35.744541306Z" level=info msg="RemovePodSandbox \"8655a5431f760bc5123fbd05379551b46eb4d3dd2cb92d72cbe723f52d44f497\" returns successfully" Nov 1 00:23:35.745284 containerd[1624]: time="2025-11-01T00:23:35.745054898Z" level=info msg="StopPodSandbox for \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\"" Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.774 [WARNING][5601] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0", GenerateName:"calico-apiserver-5cd88c66c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a5e4241-2b02-4d05-aee8-621954146083", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd88c66c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946", Pod:"calico-apiserver-5cd88c66c7-t86s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicddb88a7c00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.774 [INFO][5601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.774 [INFO][5601] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" iface="eth0" netns="" Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.774 [INFO][5601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.774 [INFO][5601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.790 [INFO][5609] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" HandleID="k8s-pod-network.6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.791 [INFO][5609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.791 [INFO][5609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.796 [WARNING][5609] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" HandleID="k8s-pod-network.6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.796 [INFO][5609] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" HandleID="k8s-pod-network.6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.798 [INFO][5609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.803273 containerd[1624]: 2025-11-01 00:23:35.801 [INFO][5601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:35.804734 containerd[1624]: time="2025-11-01T00:23:35.803306231Z" level=info msg="TearDown network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\" successfully" Nov 1 00:23:35.804734 containerd[1624]: time="2025-11-01T00:23:35.803340934Z" level=info msg="StopPodSandbox for \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\" returns successfully" Nov 1 00:23:35.804734 containerd[1624]: time="2025-11-01T00:23:35.803945060Z" level=info msg="RemovePodSandbox for \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\"" Nov 1 00:23:35.804734 containerd[1624]: time="2025-11-01T00:23:35.804081338Z" level=info msg="Forcibly stopping sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\"" Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.837 [WARNING][5623] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0", GenerateName:"calico-apiserver-5cd88c66c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7a5e4241-2b02-4d05-aee8-621954146083", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cd88c66c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"297626ac4884760a258067076c7c1d5b23cc9f3d13df622dd4eaf235a380e946", Pod:"calico-apiserver-5cd88c66c7-t86s4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicddb88a7c00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.837 [INFO][5623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.837 [INFO][5623] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" iface="eth0" netns="" Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.837 [INFO][5623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.837 [INFO][5623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.856 [INFO][5630] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" HandleID="k8s-pod-network.6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.856 [INFO][5630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.856 [INFO][5630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.861 [WARNING][5630] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" HandleID="k8s-pod-network.6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.862 [INFO][5630] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" HandleID="k8s-pod-network.6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--apiserver--5cd88c66c7--t86s4-eth0" Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.863 [INFO][5630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.869363 containerd[1624]: 2025-11-01 00:23:35.865 [INFO][5623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e" Nov 1 00:23:35.869363 containerd[1624]: time="2025-11-01T00:23:35.867493269Z" level=info msg="TearDown network for sandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\" successfully" Nov 1 00:23:35.870443 containerd[1624]: time="2025-11-01T00:23:35.870416186Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:35.871711 containerd[1624]: time="2025-11-01T00:23:35.871675833Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:35.871805 containerd[1624]: time="2025-11-01T00:23:35.871749317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active 
requests=0, bytes read=93" Nov 1 00:23:35.872112 kubelet[2745]: E1101 00:23:35.871892 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:35.872112 kubelet[2745]: E1101 00:23:35.871948 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:35.872112 kubelet[2745]: E1101 00:23:35.872068 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hblht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:35.873978 kubelet[2745]: E1101 00:23:35.873271 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:23:35.875073 containerd[1624]: time="2025-11-01T00:23:35.874534274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 1 00:23:35.875073 containerd[1624]: time="2025-11-01T00:23:35.874610240Z" level=info msg="RemovePodSandbox \"6d9389d3b86b0b786db3efcb74bada04ce61f1ee70858d6041bd10c3a9d3876e\" returns successfully" Nov 1 00:23:35.875768 containerd[1624]: time="2025-11-01T00:23:35.875434097Z" level=info msg="StopPodSandbox for \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\"" Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.910 [WARNING][5645] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0", GenerateName:"calico-kube-controllers-85dfcd4bbd-", Namespace:"calico-system", SelfLink:"", UID:"e33febfb-cf29-450e-a371-4a2c6d265345", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85dfcd4bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5", Pod:"calico-kube-controllers-85dfcd4bbd-qbgm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91f9ac355c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.910 [INFO][5645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.910 [INFO][5645] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" iface="eth0" netns="" Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.910 [INFO][5645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.910 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.931 [INFO][5653] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" HandleID="k8s-pod-network.7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.931 [INFO][5653] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.931 [INFO][5653] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.936 [WARNING][5653] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" HandleID="k8s-pod-network.7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.936 [INFO][5653] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" HandleID="k8s-pod-network.7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.938 [INFO][5653] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:35.943207 containerd[1624]: 2025-11-01 00:23:35.940 [INFO][5645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:35.943809 containerd[1624]: time="2025-11-01T00:23:35.943668073Z" level=info msg="TearDown network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\" successfully" Nov 1 00:23:35.943809 containerd[1624]: time="2025-11-01T00:23:35.943697417Z" level=info msg="StopPodSandbox for \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\" returns successfully" Nov 1 00:23:35.944308 containerd[1624]: time="2025-11-01T00:23:35.944274845Z" level=info msg="RemovePodSandbox for \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\"" Nov 1 00:23:35.944308 containerd[1624]: time="2025-11-01T00:23:35.944308466Z" level=info msg="Forcibly stopping sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\"" Nov 1 00:23:35.994050 containerd[1624]: time="2025-11-01T00:23:35.993995388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:36.046966 
containerd[1624]: 2025-11-01 00:23:35.983 [WARNING][5668] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0", GenerateName:"calico-kube-controllers-85dfcd4bbd-", Namespace:"calico-system", SelfLink:"", UID:"e33febfb-cf29-450e-a371-4a2c6d265345", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85dfcd4bbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"bbccba43afb317df9173a3f8d39bcff807d4a541c1fa0e239d1bffbb06dd3df5", Pod:"calico-kube-controllers-85dfcd4bbd-qbgm9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali91f9ac355c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:35.984 [INFO][5668] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:35.984 [INFO][5668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" iface="eth0" netns="" Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:35.984 [INFO][5668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:35.984 [INFO][5668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:36.034 [INFO][5675] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" HandleID="k8s-pod-network.7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:36.035 [INFO][5675] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:36.035 [INFO][5675] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:36.041 [WARNING][5675] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" HandleID="k8s-pod-network.7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:36.041 [INFO][5675] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" HandleID="k8s-pod-network.7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Workload="ci--4081--3--6--n--b21903d23a-k8s-calico--kube--controllers--85dfcd4bbd--qbgm9-eth0" Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:36.043 [INFO][5675] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.046966 containerd[1624]: 2025-11-01 00:23:36.045 [INFO][5668] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3" Nov 1 00:23:36.047611 containerd[1624]: time="2025-11-01T00:23:36.047023133Z" level=info msg="TearDown network for sandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\" successfully" Nov 1 00:23:36.052153 containerd[1624]: time="2025-11-01T00:23:36.050812853Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:36.052153 containerd[1624]: time="2025-11-01T00:23:36.050864667Z" level=info msg="RemovePodSandbox \"7f5d4760ec722aad9edcd435aef28aaceb73e701010ebb16f30e626247defac3\" returns successfully" Nov 1 00:23:36.052153 containerd[1624]: time="2025-11-01T00:23:36.051340171Z" level=info msg="StopPodSandbox for \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\"" Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.080 [WARNING][5690] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"16c60fed-179e-4b9b-b5f3-3af5fa94c7e7", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0", Pod:"coredns-668d6bf9bc-hbtgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d9beeed6b4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.081 [INFO][5690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.081 [INFO][5690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" iface="eth0" netns="" Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.081 [INFO][5690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.081 [INFO][5690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.106 [INFO][5697] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" HandleID="k8s-pod-network.e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.107 [INFO][5697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.107 [INFO][5697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.115 [WARNING][5697] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" HandleID="k8s-pod-network.e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.115 [INFO][5697] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" HandleID="k8s-pod-network.e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.117 [INFO][5697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.123807 containerd[1624]: 2025-11-01 00:23:36.121 [INFO][5690] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:36.123807 containerd[1624]: time="2025-11-01T00:23:36.123765467Z" level=info msg="TearDown network for sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\" successfully" Nov 1 00:23:36.123807 containerd[1624]: time="2025-11-01T00:23:36.123799979Z" level=info msg="StopPodSandbox for \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\" returns successfully" Nov 1 00:23:36.126819 containerd[1624]: time="2025-11-01T00:23:36.124355048Z" level=info msg="RemovePodSandbox for \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\"" Nov 1 00:23:36.126819 containerd[1624]: time="2025-11-01T00:23:36.124386966Z" level=info msg="Forcibly stopping sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\"" Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.164 [WARNING][5711] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"16c60fed-179e-4b9b-b5f3-3af5fa94c7e7", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-b21903d23a", ContainerID:"47fb35e8323f6d0d9272b506ce26863a05021987cec71f1a6cf64864c7ab15c0", Pod:"coredns-668d6bf9bc-hbtgd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8d9beeed6b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.196432 containerd[1624]: 
2025-11-01 00:23:36.164 [INFO][5711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.164 [INFO][5711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" iface="eth0" netns="" Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.164 [INFO][5711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.164 [INFO][5711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.185 [INFO][5719] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" HandleID="k8s-pod-network.e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.185 [INFO][5719] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.185 [INFO][5719] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.191 [WARNING][5719] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" HandleID="k8s-pod-network.e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.191 [INFO][5719] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" HandleID="k8s-pod-network.e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Workload="ci--4081--3--6--n--b21903d23a-k8s-coredns--668d6bf9bc--hbtgd-eth0" Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.193 [INFO][5719] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.196432 containerd[1624]: 2025-11-01 00:23:36.194 [INFO][5711] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878" Nov 1 00:23:36.197318 containerd[1624]: time="2025-11-01T00:23:36.196481541Z" level=info msg="TearDown network for sandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\" successfully" Nov 1 00:23:36.199980 containerd[1624]: time="2025-11-01T00:23:36.199947141Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:36.200046 containerd[1624]: time="2025-11-01T00:23:36.200001831Z" level=info msg="RemovePodSandbox \"e0d65a30fb6e9394ff5964eede1a2f27df5a4285178739262c56c19dd30a4878\" returns successfully" Nov 1 00:23:36.434403 containerd[1624]: time="2025-11-01T00:23:36.434339835Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:36.435740 containerd[1624]: time="2025-11-01T00:23:36.435668439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:36.435837 containerd[1624]: time="2025-11-01T00:23:36.435755758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:36.436120 kubelet[2745]: E1101 00:23:36.436065 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:36.436227 kubelet[2745]: E1101 00:23:36.436149 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:36.436704 kubelet[2745]: E1101 00:23:36.436285 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwr9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85dfcd4bbd-qbgm9_calico-system(e33febfb-cf29-450e-a371-4a2c6d265345): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:36.437965 kubelet[2745]: E1101 00:23:36.437930 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:23:38.995429 containerd[1624]: time="2025-11-01T00:23:38.994868963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:39.443514 containerd[1624]: 
time="2025-11-01T00:23:39.443434834Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:39.445379 containerd[1624]: time="2025-11-01T00:23:39.445266697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:39.445379 containerd[1624]: time="2025-11-01T00:23:39.445339309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:39.445714 kubelet[2745]: E1101 00:23:39.445647 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:39.445714 kubelet[2745]: E1101 00:23:39.445700 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:39.446678 kubelet[2745]: E1101 00:23:39.445893 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2bh9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd88c66c7-sqhhf_calico-apiserver(86457ed6-a969-4f17-a69a-681dcab352cc): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:39.447524 kubelet[2745]: E1101 00:23:39.447440 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:23:42.418104 systemd[1]: run-containerd-runc-k8s.io-6e0cf0686fb2f6cb2b56a47e33262020e059e8f679a12683c61605d9c7c4400c-runc.X74la9.mount: Deactivated successfully. Nov 1 00:23:45.008758 kubelet[2745]: E1101 00:23:45.008349 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 
1 00:23:46.995057 kubelet[2745]: E1101 00:23:46.995003 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:23:46.998282 kubelet[2745]: E1101 00:23:46.998064 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:23:48.998671 kubelet[2745]: E1101 00:23:48.998610 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:23:50.995974 kubelet[2745]: E1101 00:23:50.994266 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:23:53.993239 kubelet[2745]: E1101 00:23:53.993186 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:23:58.997433 containerd[1624]: time="2025-11-01T00:23:58.997346150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:59.436279 containerd[1624]: time="2025-11-01T00:23:59.436072083Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:59.437356 containerd[1624]: 
time="2025-11-01T00:23:59.437320229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:59.437549 containerd[1624]: time="2025-11-01T00:23:59.437406739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:59.437692 kubelet[2745]: E1101 00:23:59.437656 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:59.438065 kubelet[2745]: E1101 00:23:59.437704 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:59.438065 kubelet[2745]: E1101 00:23:59.437804 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b9a93a7233c9461ab5447c8e9d685214,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg6sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5bd87784b4-tjjnp_calico-system(ceed905b-f8f5-47a1-9eef-2e450e657cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:59.442967 containerd[1624]: time="2025-11-01T00:23:59.442895323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
00:24:00.040433 containerd[1624]: time="2025-11-01T00:24:00.040109169Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:00.041875 containerd[1624]: time="2025-11-01T00:24:00.041736711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:24:00.041875 containerd[1624]: time="2025-11-01T00:24:00.041848087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:00.043082 kubelet[2745]: E1101 00:24:00.042063 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:00.043082 kubelet[2745]: E1101 00:24:00.042113 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:00.043082 kubelet[2745]: E1101 00:24:00.042323 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg6sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5bd87784b4-tjjnp_calico-system(ceed905b-f8f5-47a1-9eef-2e450e657cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:00.043492 containerd[1624]: time="2025-11-01T00:24:00.043473425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:00.043826 kubelet[2745]: E1101 00:24:00.043662 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:24:00.489716 containerd[1624]: time="2025-11-01T00:24:00.489661112Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:00.490994 containerd[1624]: time="2025-11-01T00:24:00.490953210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:00.491177 containerd[1624]: time="2025-11-01T00:24:00.491034321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 
00:24:00.491253 kubelet[2745]: E1101 00:24:00.491190 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:00.491706 kubelet[2745]: E1101 00:24:00.491247 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:00.491706 kubelet[2745]: E1101 00:24:00.491369 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfdz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd88c66c7-t86s4_calico-apiserver(7a5e4241-2b02-4d05-aee8-621954146083): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:00.492888 kubelet[2745]: E1101 00:24:00.492854 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:24:00.994986 containerd[1624]: time="2025-11-01T00:24:00.994350657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:24:01.434474 containerd[1624]: time="2025-11-01T00:24:01.433049085Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:01.435638 containerd[1624]: time="2025-11-01T00:24:01.435598080Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:24:01.438153 kubelet[2745]: E1101 00:24:01.437462 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:01.438153 kubelet[2745]: E1101 00:24:01.437515 2745 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:01.438153 kubelet[2745]: E1101 00:24:01.437640 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lrfg9_calico-system(5aad50b7-9c5b-4c75-b82d-9cd68d392290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:01.441148 kubelet[2745]: E1101 00:24:01.439254 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 
00:24:01.447158 containerd[1624]: time="2025-11-01T00:24:01.437183404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:01.993475 containerd[1624]: time="2025-11-01T00:24:01.993047008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:02.427050 containerd[1624]: time="2025-11-01T00:24:02.426867348Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:02.428296 containerd[1624]: time="2025-11-01T00:24:02.428227856Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:02.428414 containerd[1624]: time="2025-11-01T00:24:02.428359711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:24:02.428702 kubelet[2745]: E1101 00:24:02.428641 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:02.429188 kubelet[2745]: E1101 00:24:02.428711 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:02.429188 kubelet[2745]: E1101 00:24:02.428889 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hblht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:02.431537 containerd[1624]: time="2025-11-01T00:24:02.431489468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:24:02.859198 containerd[1624]: time="2025-11-01T00:24:02.858920411Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:02.860325 containerd[1624]: time="2025-11-01T00:24:02.860270339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:24:02.860658 containerd[1624]: time="2025-11-01T00:24:02.860457888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:24:02.860999 kubelet[2745]: E1101 00:24:02.860880 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:02.860999 kubelet[2745]: E1101 00:24:02.860963 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:02.861708 kubelet[2745]: E1101 00:24:02.861524 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hblht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contai
nerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:02.863282 kubelet[2745]: E1101 00:24:02.863211 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:24:03.994600 containerd[1624]: time="2025-11-01T00:24:03.994251785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:04.431097 containerd[1624]: time="2025-11-01T00:24:04.430867882Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:04.432099 containerd[1624]: time="2025-11-01T00:24:04.432056172Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:04.432268 containerd[1624]: time="2025-11-01T00:24:04.432151991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:04.432510 kubelet[2745]: E1101 00:24:04.432419 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:04.432510 kubelet[2745]: E1101 00:24:04.432483 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:04.433353 kubelet[2745]: E1101 00:24:04.432600 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwr9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85dfcd4bbd-qbgm9_calico-system(e33febfb-cf29-450e-a371-4a2c6d265345): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:04.435615 kubelet[2745]: E1101 00:24:04.435525 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:24:06.997779 containerd[1624]: time="2025-11-01T00:24:06.997741266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:07.440244 containerd[1624]: 
time="2025-11-01T00:24:07.440053879Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:07.441572 containerd[1624]: time="2025-11-01T00:24:07.441402722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:07.441572 containerd[1624]: time="2025-11-01T00:24:07.441517897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:07.441828 kubelet[2745]: E1101 00:24:07.441746 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:07.442156 kubelet[2745]: E1101 00:24:07.441823 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:07.442156 kubelet[2745]: E1101 00:24:07.441987 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2bh9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd88c66c7-sqhhf_calico-apiserver(86457ed6-a969-4f17-a69a-681dcab352cc): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:07.443498 kubelet[2745]: E1101 00:24:07.443466 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:24:11.994601 kubelet[2745]: E1101 00:24:11.994509 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:24:11.998613 kubelet[2745]: E1101 00:24:11.998502 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:24:12.994261 kubelet[2745]: E1101 00:24:12.993908 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:24:15.998310 kubelet[2745]: E1101 00:24:15.998244 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:24:16.997925 kubelet[2745]: E1101 00:24:16.997864 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:24:21.993316 kubelet[2745]: E1101 00:24:21.992952 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:24:22.994704 kubelet[2745]: E1101 00:24:22.994450 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:24:25.993003 kubelet[2745]: E1101 00:24:25.992730 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:24:26.406045 systemd[1]: Started sshd@7-46.62.149.99:22-147.75.109.163:50528.service - OpenSSH per-connection server daemon (147.75.109.163:50528). 
Nov 1 00:24:26.993912 kubelet[2745]: E1101 00:24:26.993866 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083"
Nov 1 00:24:27.475192 sshd[5803]: Accepted publickey for core from 147.75.109.163 port 50528 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:24:27.477587 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:24:27.494495 systemd-logind[1610]: New session 8 of user core.
Nov 1 00:24:27.500145 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 1 00:24:28.670608 sshd[5803]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:28.678485 systemd[1]: sshd@7-46.62.149.99:22-147.75.109.163:50528.service: Deactivated successfully.
Nov 1 00:24:28.684180 systemd-logind[1610]: Session 8 logged out. Waiting for processes to exit.
Nov 1 00:24:28.687773 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 00:24:28.691775 systemd-logind[1610]: Removed session 8.
Nov 1 00:24:30.995422 kubelet[2745]: E1101 00:24:30.995370 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:24:31.996419 kubelet[2745]: E1101 00:24:31.996344 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:24:32.996105 kubelet[2745]: E1101 00:24:32.995865 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:24:33.880409 systemd[1]: Started sshd@8-46.62.149.99:22-147.75.109.163:57392.service - OpenSSH per-connection server daemon (147.75.109.163:57392). Nov 1 00:24:33.995565 kubelet[2745]: E1101 00:24:33.995486 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:24:35.001309 sshd[5818]: Accepted publickey for core from 147.75.109.163 port 57392 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:35.001969 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:35.006727 systemd-logind[1610]: New session 9 of user core. 
Nov 1 00:24:35.011362 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 1 00:24:35.921437 sshd[5818]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:35.927122 systemd[1]: sshd@8-46.62.149.99:22-147.75.109.163:57392.service: Deactivated successfully.
Nov 1 00:24:35.935394 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 00:24:35.935632 systemd-logind[1610]: Session 9 logged out. Waiting for processes to exit.
Nov 1 00:24:35.938329 systemd-logind[1610]: Removed session 9.
Nov 1 00:24:36.109568 systemd[1]: Started sshd@9-46.62.149.99:22-147.75.109.163:57394.service - OpenSSH per-connection server daemon (147.75.109.163:57394).
Nov 1 00:24:37.244207 sshd[5835]: Accepted publickey for core from 147.75.109.163 port 57394 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:24:37.245297 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:24:37.259460 systemd-logind[1610]: New session 10 of user core.
Nov 1 00:24:37.263335 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 1 00:24:38.142554 sshd[5835]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:38.150757 systemd[1]: sshd@9-46.62.149.99:22-147.75.109.163:57394.service: Deactivated successfully.
Nov 1 00:24:38.155817 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 00:24:38.156163 systemd-logind[1610]: Session 10 logged out. Waiting for processes to exit.
Nov 1 00:24:38.159003 systemd-logind[1610]: Removed session 10.
Nov 1 00:24:38.303026 systemd[1]: Started sshd@10-46.62.149.99:22-147.75.109.163:57400.service - OpenSSH per-connection server daemon (147.75.109.163:57400).
Nov 1 00:24:39.327950 sshd[5847]: Accepted publickey for core from 147.75.109.163 port 57400 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE
Nov 1 00:24:39.329586 sshd[5847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:24:39.333675 systemd-logind[1610]: New session 11 of user core.
Nov 1 00:24:39.336597 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 1 00:24:39.995047 kubelet[2745]: E1101 00:24:39.995009 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083"
Nov 1 00:24:40.154297 sshd[5847]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:40.157490 systemd[1]: sshd@10-46.62.149.99:22-147.75.109.163:57400.service: Deactivated successfully.
Nov 1 00:24:40.167408 systemd[1]: session-11.scope: Deactivated successfully.
Nov 1 00:24:40.168670 systemd-logind[1610]: Session 11 logged out. Waiting for processes to exit.
Nov 1 00:24:40.169722 systemd-logind[1610]: Removed session 11.
Nov 1 00:24:40.993905 kubelet[2745]: E1101 00:24:40.993817 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:24:42.416205 systemd[1]: run-containerd-runc-k8s.io-6e0cf0686fb2f6cb2b56a47e33262020e059e8f679a12683c61605d9c7c4400c-runc.ehUHTt.mount: Deactivated successfully. Nov 1 00:24:42.992956 containerd[1624]: time="2025-11-01T00:24:42.992704002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:43.430457 containerd[1624]: time="2025-11-01T00:24:43.429993901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:43.437032 containerd[1624]: time="2025-11-01T00:24:43.436960849Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:43.437887 containerd[1624]: time="2025-11-01T00:24:43.437451343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:24:43.437939 kubelet[2745]: E1101 00:24:43.437732 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:43.438734 kubelet[2745]: E1101 00:24:43.438370 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:43.438734 kubelet[2745]: E1101 00:24:43.438503 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hblht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeE
scalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:43.442011 containerd[1624]: time="2025-11-01T00:24:43.441551156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:24:43.870356 containerd[1624]: time="2025-11-01T00:24:43.870273485Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:43.871719 containerd[1624]: time="2025-11-01T00:24:43.871510684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:24:43.871719 containerd[1624]: time="2025-11-01T00:24:43.871591225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:24:43.872064 kubelet[2745]: E1101 00:24:43.871864 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:43.872064 kubelet[2745]: E1101 00:24:43.872003 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:43.872455 kubelet[2745]: E1101 00:24:43.872414 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hblht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminatio
nMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jnx62_calico-system(2fb3e683-810b-4091-a4c8-6fa869de6607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:43.873777 kubelet[2745]: E1101 00:24:43.873750 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:24:44.995723 containerd[1624]: time="2025-11-01T00:24:44.994398415Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:44.996898 kubelet[2745]: E1101 00:24:44.994601 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:24:45.320483 systemd[1]: Started sshd@11-46.62.149.99:22-147.75.109.163:50018.service - OpenSSH per-connection server daemon (147.75.109.163:50018). Nov 1 00:24:45.432663 containerd[1624]: time="2025-11-01T00:24:45.432582939Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:45.435776 containerd[1624]: time="2025-11-01T00:24:45.435609334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:45.435776 containerd[1624]: time="2025-11-01T00:24:45.435702330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:45.436498 kubelet[2745]: E1101 00:24:45.436059 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:45.436498 kubelet[2745]: E1101 00:24:45.436203 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:45.436498 kubelet[2745]: E1101 00:24:45.436342 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jwr9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&
ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85dfcd4bbd-qbgm9_calico-system(e33febfb-cf29-450e-a371-4a2c6d265345): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:45.437840 kubelet[2745]: E1101 00:24:45.437763 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:24:46.326968 sshd[5890]: Accepted publickey for core from 147.75.109.163 port 50018 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:46.327238 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:46.332325 systemd-logind[1610]: New session 12 of user core. Nov 1 00:24:46.339430 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:24:47.134392 sshd[5890]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:47.137121 systemd-logind[1610]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:24:47.140606 systemd[1]: sshd@11-46.62.149.99:22-147.75.109.163:50018.service: Deactivated successfully. Nov 1 00:24:47.145000 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:24:47.146270 systemd-logind[1610]: Removed session 12. Nov 1 00:24:48.992866 containerd[1624]: time="2025-11-01T00:24:48.992758893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:24:49.436140 containerd[1624]: time="2025-11-01T00:24:49.436073398Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:49.437270 containerd[1624]: time="2025-11-01T00:24:49.437227031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:24:49.437400 containerd[1624]: time="2025-11-01T00:24:49.437234334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:24:49.437452 kubelet[2745]: E1101 00:24:49.437419 2745 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:49.438186 kubelet[2745]: E1101 00:24:49.437464 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:49.438186 kubelet[2745]: E1101 00:24:49.437612 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b9a93a7233c9461ab5447c8e9d685214,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kg6sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5bd87784b4-tjjnp_calico-system(ceed905b-f8f5-47a1-9eef-2e450e657cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:49.440108 containerd[1624]: time="2025-11-01T00:24:49.440058312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:24:49.890112 containerd[1624]: time="2025-11-01T00:24:49.889893469Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:49.893652 containerd[1624]: time="2025-11-01T00:24:49.893414109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:24:49.893652 containerd[1624]: time="2025-11-01T00:24:49.893564271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:49.894032 kubelet[2745]: E1101 00:24:49.893900 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 
00:24:49.894302 kubelet[2745]: E1101 00:24:49.894216 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:49.894445 kubelet[2745]: E1101 00:24:49.894369 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kg6sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*fal
se,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5bd87784b4-tjjnp_calico-system(ceed905b-f8f5-47a1-9eef-2e450e657cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:49.896240 kubelet[2745]: E1101 00:24:49.896197 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:24:51.996757 containerd[1624]: time="2025-11-01T00:24:51.996045726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:52.337033 systemd[1]: Started sshd@12-46.62.149.99:22-147.75.109.163:57292.service - OpenSSH per-connection server daemon (147.75.109.163:57292). 
Nov 1 00:24:52.430097 containerd[1624]: time="2025-11-01T00:24:52.430042668Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:52.431435 containerd[1624]: time="2025-11-01T00:24:52.431379747Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:52.431511 containerd[1624]: time="2025-11-01T00:24:52.431479435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:52.431721 kubelet[2745]: E1101 00:24:52.431672 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:52.432023 kubelet[2745]: E1101 00:24:52.431727 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:52.432023 kubelet[2745]: E1101 00:24:52.431858 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rfdz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd88c66c7-t86s4_calico-apiserver(7a5e4241-2b02-4d05-aee8-621954146083): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:52.432987 kubelet[2745]: E1101 00:24:52.432950 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:24:52.995389 containerd[1624]: time="2025-11-01T00:24:52.995352700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:24:53.428564 containerd[1624]: time="2025-11-01T00:24:53.428301775Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:53.429863 containerd[1624]: time="2025-11-01T00:24:53.429629125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:24:53.429863 containerd[1624]: time="2025-11-01T00:24:53.429676755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:53.429985 kubelet[2745]: E1101 00:24:53.429953 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:53.430191 kubelet[2745]: E1101 00:24:53.429999 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:53.432151 kubelet[2745]: E1101 00:24:53.430752 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lrfg9_calico-system(5aad50b7-9c5b-4c75-b82d-9cd68d392290): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:53.432151 kubelet[2745]: E1101 00:24:53.431903 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:24:53.439372 sshd[5913]: Accepted publickey for core from 147.75.109.163 port 57292 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:53.440668 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:53.447115 systemd-logind[1610]: New session 13 of user core. Nov 1 00:24:53.455416 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:24:54.281325 sshd[5913]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:54.286311 systemd[1]: sshd@12-46.62.149.99:22-147.75.109.163:57292.service: Deactivated successfully. Nov 1 00:24:54.293344 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:24:54.294722 systemd-logind[1610]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:24:54.297271 systemd-logind[1610]: Removed session 13. Nov 1 00:24:54.435213 systemd[1]: Started sshd@13-46.62.149.99:22-147.75.109.163:57296.service - OpenSSH per-connection server daemon (147.75.109.163:57296). 
Nov 1 00:24:55.002688 kubelet[2745]: E1101 00:24:55.002627 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:24:55.448816 sshd[5927]: Accepted publickey for core from 147.75.109.163 port 57296 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:55.449783 sshd[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:55.454820 systemd-logind[1610]: New session 14 of user core. Nov 1 00:24:55.459334 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:24:56.501146 sshd[5927]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:56.505353 systemd[1]: sshd@13-46.62.149.99:22-147.75.109.163:57296.service: Deactivated successfully. Nov 1 00:24:56.512602 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:24:56.512802 systemd-logind[1610]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:24:56.515017 systemd-logind[1610]: Removed session 14. 
Nov 1 00:24:56.702370 systemd[1]: Started sshd@14-46.62.149.99:22-147.75.109.163:57312.service - OpenSSH per-connection server daemon (147.75.109.163:57312). Nov 1 00:24:56.997710 containerd[1624]: time="2025-11-01T00:24:56.997343766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:57.436997 containerd[1624]: time="2025-11-01T00:24:57.436940665Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:57.438391 containerd[1624]: time="2025-11-01T00:24:57.438210056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:57.438391 containerd[1624]: time="2025-11-01T00:24:57.438301579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:57.438783 kubelet[2745]: E1101 00:24:57.438532 2745 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:57.438783 kubelet[2745]: E1101 00:24:57.438599 2745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:57.438783 kubelet[2745]: E1101 00:24:57.438725 2745 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2bh9z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5cd88c66c7-sqhhf_calico-apiserver(86457ed6-a969-4f17-a69a-681dcab352cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:57.440929 kubelet[2745]: E1101 00:24:57.440874 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:24:57.815166 sshd[5939]: Accepted publickey for core from 147.75.109.163 port 57312 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:24:57.815955 sshd[5939]: pam_unix(sshd:session): 
session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:57.821187 systemd-logind[1610]: New session 15 of user core. Nov 1 00:24:57.826354 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:24:59.396260 sshd[5939]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:59.406588 systemd[1]: sshd@14-46.62.149.99:22-147.75.109.163:57312.service: Deactivated successfully. Nov 1 00:24:59.412981 systemd-logind[1610]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:24:59.416609 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:24:59.420647 systemd-logind[1610]: Removed session 15. Nov 1 00:24:59.581460 systemd[1]: Started sshd@15-46.62.149.99:22-147.75.109.163:57328.service - OpenSSH per-connection server daemon (147.75.109.163:57328). Nov 1 00:24:59.994472 kubelet[2745]: E1101 00:24:59.994419 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:25:00.693642 sshd[5958]: Accepted publickey for core from 147.75.109.163 port 57328 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:25:00.695336 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:00.701500 systemd-logind[1610]: New session 16 of user core. Nov 1 00:25:00.708490 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 1 00:25:01.000185 kubelet[2745]: E1101 00:25:00.997088 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:25:01.751324 sshd[5958]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:01.759403 systemd[1]: sshd@15-46.62.149.99:22-147.75.109.163:57328.service: Deactivated successfully. Nov 1 00:25:01.766401 systemd-logind[1610]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:25:01.768388 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:25:01.774392 systemd-logind[1610]: Removed session 16. Nov 1 00:25:01.898339 systemd[1]: Started sshd@16-46.62.149.99:22-147.75.109.163:50944.service - OpenSSH per-connection server daemon (147.75.109.163:50944). Nov 1 00:25:02.894807 sshd[5970]: Accepted publickey for core from 147.75.109.163 port 50944 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:25:02.901381 sshd[5970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:02.914357 systemd-logind[1610]: New session 17 of user core. 
Nov 1 00:25:02.921530 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:25:03.715437 sshd[5970]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:03.717989 systemd[1]: sshd@16-46.62.149.99:22-147.75.109.163:50944.service: Deactivated successfully. Nov 1 00:25:03.721458 systemd-logind[1610]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:25:03.723791 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:25:03.724733 systemd-logind[1610]: Removed session 17. Nov 1 00:25:06.998530 kubelet[2745]: E1101 00:25:06.998224 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:25:06.999287 kubelet[2745]: E1101 00:25:06.998948 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:25:07.012957 kubelet[2745]: E1101 00:25:07.012906 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:25:08.920419 systemd[1]: Started sshd@17-46.62.149.99:22-147.75.109.163:50952.service - OpenSSH per-connection server daemon (147.75.109.163:50952). Nov 1 00:25:08.993833 kubelet[2745]: E1101 00:25:08.993777 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:25:10.071200 sshd[6007]: Accepted publickey for core from 147.75.109.163 port 50952 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:25:10.073793 sshd[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:10.080725 systemd-logind[1610]: New session 18 of user core. 
Nov 1 00:25:10.086365 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 1 00:25:10.944725 sshd[6007]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:10.950708 systemd[1]: sshd@17-46.62.149.99:22-147.75.109.163:50952.service: Deactivated successfully. Nov 1 00:25:10.952441 systemd-logind[1610]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:25:10.953733 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:25:10.954786 systemd-logind[1610]: Removed session 18. Nov 1 00:25:10.994159 kubelet[2745]: E1101 00:25:10.994100 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:25:12.994985 kubelet[2745]: E1101 00:25:12.994730 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:25:16.105588 systemd[1]: Started sshd@18-46.62.149.99:22-147.75.109.163:50490.service - OpenSSH per-connection server daemon (147.75.109.163:50490). Nov 1 00:25:17.125867 sshd[6044]: Accepted publickey for core from 147.75.109.163 port 50490 ssh2: RSA SHA256:KMkO2BRQK4zvHgtpo4/QlyEdSpVbdU7AAfefKOV9vEE Nov 1 00:25:17.127931 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:17.132230 systemd-logind[1610]: New session 19 of user core. Nov 1 00:25:17.138311 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:25:17.928916 sshd[6044]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:17.932519 systemd[1]: sshd@18-46.62.149.99:22-147.75.109.163:50490.service: Deactivated successfully. Nov 1 00:25:17.941506 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:25:17.943193 systemd-logind[1610]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:25:17.944745 systemd-logind[1610]: Removed session 19. 
Nov 1 00:25:19.992697 kubelet[2745]: E1101 00:25:19.992638 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:25:20.994286 kubelet[2745]: E1101 00:25:20.994205 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607" Nov 1 00:25:20.995862 kubelet[2745]: E1101 00:25:20.995449 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:25:22.993229 kubelet[2745]: E1101 00:25:22.993148 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" Nov 1 00:25:24.992626 kubelet[2745]: E1101 00:25:24.992535 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345" Nov 1 00:25:25.992934 kubelet[2745]: E1101 00:25:25.992873 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3" Nov 1 00:25:31.993309 kubelet[2745]: E1101 00:25:31.993228 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lrfg9" podUID="5aad50b7-9c5b-4c75-b82d-9cd68d392290" Nov 1 00:25:32.930189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef743b3c56b6f12bc4d772fc2fc0bf74a123d022aba9777cf5de0cbb698c9108-rootfs.mount: Deactivated successfully. 
Nov 1 00:25:33.002928 kubelet[2745]: E1101 00:25:33.002851 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-t86s4" podUID="7a5e4241-2b02-4d05-aee8-621954146083" Nov 1 00:25:33.009290 containerd[1624]: time="2025-11-01T00:25:32.949043796Z" level=info msg="shim disconnected" id=ef743b3c56b6f12bc4d772fc2fc0bf74a123d022aba9777cf5de0cbb698c9108 namespace=k8s.io Nov 1 00:25:33.023384 containerd[1624]: time="2025-11-01T00:25:33.023286236Z" level=warning msg="cleaning up after shim disconnected" id=ef743b3c56b6f12bc4d772fc2fc0bf74a123d022aba9777cf5de0cbb698c9108 namespace=k8s.io Nov 1 00:25:33.023384 containerd[1624]: time="2025-11-01T00:25:33.023360837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:25:33.261087 kubelet[2745]: E1101 00:25:33.261036 2745 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57062->10.0.0.2:2379: read: connection timed out" Nov 1 00:25:33.269621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-188d27287d28bd27fa28d62cf515f7266863b04237a54d49f59dac908766e230-rootfs.mount: Deactivated successfully. 
Nov 1 00:25:33.276076 containerd[1624]: time="2025-11-01T00:25:33.275973581Z" level=info msg="shim disconnected" id=188d27287d28bd27fa28d62cf515f7266863b04237a54d49f59dac908766e230 namespace=k8s.io Nov 1 00:25:33.276076 containerd[1624]: time="2025-11-01T00:25:33.276030178Z" level=warning msg="cleaning up after shim disconnected" id=188d27287d28bd27fa28d62cf515f7266863b04237a54d49f59dac908766e230 namespace=k8s.io Nov 1 00:25:33.276076 containerd[1624]: time="2025-11-01T00:25:33.276039676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:25:33.427964 kubelet[2745]: I1101 00:25:33.426743 2745 status_manager.go:890] "Failed to get status for pod" podUID="86457ed6-a969-4f17-a69a-681dcab352cc" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56996->10.0.0.2:2379: read: connection timed out" Nov 1 00:25:33.467889 kubelet[2745]: E1101 00:25:33.429548 2745 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56868->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-5cd88c66c7-sqhhf.1873ba3213cb611e calico-apiserver 1598 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-5cd88c66c7-sqhhf,UID:86457ed6-a969-4f17-a69a-681dcab352cc,APIVersion:v1,ResourceVersion:780,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-b21903d23a,},FirstTimestamp:2025-11-01 00:23:23 +0000 UTC,LastTimestamp:2025-11-01 00:25:22.993070551 +0000 UTC m=+168.131763046,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-b21903d23a,}" Nov 1 
00:25:33.865975 kubelet[2745]: I1101 00:25:33.865837 2745 scope.go:117] "RemoveContainer" containerID="188d27287d28bd27fa28d62cf515f7266863b04237a54d49f59dac908766e230"
Nov 1 00:25:33.865975 kubelet[2745]: I1101 00:25:33.865902 2745 scope.go:117] "RemoveContainer" containerID="ef743b3c56b6f12bc4d772fc2fc0bf74a123d022aba9777cf5de0cbb698c9108"
Nov 1 00:25:33.883395 containerd[1624]: time="2025-11-01T00:25:33.883337311Z" level=info msg="CreateContainer within sandbox \"ec69da99126e063038292f461f9fa6e46d152408ca5543d97d36ea935fb9e715\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 1 00:25:33.885739 containerd[1624]: time="2025-11-01T00:25:33.884827365Z" level=info msg="CreateContainer within sandbox \"a3d9f9611f29da0db158a72170134c00992d7f345d7a9d8d4063a7fec3d149e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 1 00:25:33.955264 containerd[1624]: time="2025-11-01T00:25:33.955224647Z" level=info msg="CreateContainer within sandbox \"a3d9f9611f29da0db158a72170134c00992d7f345d7a9d8d4063a7fec3d149e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ed0e1091651035dd504e2296212bf54f315a03a3b4250bced7799798a583d922\""
Nov 1 00:25:33.955939 containerd[1624]: time="2025-11-01T00:25:33.955915702Z" level=info msg="StartContainer for \"ed0e1091651035dd504e2296212bf54f315a03a3b4250bced7799798a583d922\""
Nov 1 00:25:33.958364 containerd[1624]: time="2025-11-01T00:25:33.958284836Z" level=info msg="CreateContainer within sandbox \"ec69da99126e063038292f461f9fa6e46d152408ca5543d97d36ea935fb9e715\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7051da564a43f955fee7b01f576489d236d6c82ab5505ca9a78b2469633668ea\""
Nov 1 00:25:33.958847 containerd[1624]: time="2025-11-01T00:25:33.958829645Z" level=info msg="StartContainer for \"7051da564a43f955fee7b01f576489d236d6c82ab5505ca9a78b2469633668ea\""
Nov 1 00:25:34.056772 containerd[1624]: time="2025-11-01T00:25:34.056079398Z" level=info msg="shim disconnected" id=3a8aaef7ecd9d5fcb7040be82fd33f4a9a4cec3ffadd850d6bd785130c7b2359 namespace=k8s.io
Nov 1 00:25:34.056772 containerd[1624]: time="2025-11-01T00:25:34.056185178Z" level=warning msg="cleaning up after shim disconnected" id=3a8aaef7ecd9d5fcb7040be82fd33f4a9a4cec3ffadd850d6bd785130c7b2359 namespace=k8s.io
Nov 1 00:25:34.056772 containerd[1624]: time="2025-11-01T00:25:34.056195036Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 1 00:25:34.096116 containerd[1624]: time="2025-11-01T00:25:34.096002382Z" level=info msg="StartContainer for \"ed0e1091651035dd504e2296212bf54f315a03a3b4250bced7799798a583d922\" returns successfully"
Nov 1 00:25:34.122049 containerd[1624]: time="2025-11-01T00:25:34.121361683Z" level=info msg="StartContainer for \"7051da564a43f955fee7b01f576489d236d6c82ab5505ca9a78b2469633668ea\" returns successfully"
Nov 1 00:25:34.876932 kubelet[2745]: I1101 00:25:34.876893 2745 scope.go:117] "RemoveContainer" containerID="3a8aaef7ecd9d5fcb7040be82fd33f4a9a4cec3ffadd850d6bd785130c7b2359"
Nov 1 00:25:34.879512 containerd[1624]: time="2025-11-01T00:25:34.879462071Z" level=info msg="CreateContainer within sandbox \"86cd091752e2c3b8e4fb313dde1cea9709057643fabd7f5fe0ebe7ed10d836c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 1 00:25:34.894873 containerd[1624]: time="2025-11-01T00:25:34.894794978Z" level=info msg="CreateContainer within sandbox \"86cd091752e2c3b8e4fb313dde1cea9709057643fabd7f5fe0ebe7ed10d836c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f06939cd683de63357068be9393491f167629be4c434df80a8b68ceebe1c0714\""
Nov 1 00:25:34.895417 containerd[1624]: time="2025-11-01T00:25:34.895373249Z" level=info msg="StartContainer for \"f06939cd683de63357068be9393491f167629be4c434df80a8b68ceebe1c0714\""
Nov 1 00:25:34.932470 systemd[1]: run-containerd-runc-k8s.io-ed0e1091651035dd504e2296212bf54f315a03a3b4250bced7799798a583d922-runc.sT96Ei.mount: Deactivated successfully.
Nov 1 00:25:34.932650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a8aaef7ecd9d5fcb7040be82fd33f4a9a4cec3ffadd850d6bd785130c7b2359-rootfs.mount: Deactivated successfully.
Nov 1 00:25:34.969546 containerd[1624]: time="2025-11-01T00:25:34.969495369Z" level=info msg="StartContainer for \"f06939cd683de63357068be9393491f167629be4c434df80a8b68ceebe1c0714\" returns successfully"
Nov 1 00:25:35.009052 kubelet[2745]: E1101 00:25:35.008956 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jnx62" podUID="2fb3e683-810b-4091-a4c8-6fa869de6607"
Nov 1 00:25:35.992276 kubelet[2745]: E1101 00:25:35.992225 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5cd88c66c7-sqhhf" podUID="86457ed6-a969-4f17-a69a-681dcab352cc"
Nov 1 00:25:36.992916 kubelet[2745]: E1101 00:25:36.992822 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85dfcd4bbd-qbgm9" podUID="e33febfb-cf29-450e-a371-4a2c6d265345"
Nov 1 00:25:38.000007 kubelet[2745]: E1101 00:25:37.999888 2745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5bd87784b4-tjjnp" podUID="ceed905b-f8f5-47a1-9eef-2e450e657cf3"