Nov 8 00:38:25.047880 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:38:25.047916 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:38:25.047931 kernel: BIOS-provided physical RAM map:
Nov 8 00:38:25.047947 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 8 00:38:25.047957 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 8 00:38:25.047967 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:38:25.047979 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Nov 8 00:38:25.047989 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Nov 8 00:38:25.048000 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:38:25.048010 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 00:38:25.048021 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:38:25.048031 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:38:25.048047 kernel: NX (Execute Disable) protection: active
Nov 8 00:38:25.048057 kernel: APIC: Static calls initialized
Nov 8 00:38:25.048070 kernel: SMBIOS 2.8 present.
Nov 8 00:38:25.048082 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Nov 8 00:38:25.048093 kernel: Hypervisor detected: KVM
Nov 8 00:38:25.048109 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:38:25.048121 kernel: kvm-clock: using sched offset of 4409395663 cycles
Nov 8 00:38:25.048133 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:38:25.048145 kernel: tsc: Detected 2499.998 MHz processor
Nov 8 00:38:25.048156 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:38:25.048168 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:38:25.048180 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Nov 8 00:38:25.048191 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:38:25.048203 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:38:25.048224 kernel: Using GB pages for direct mapping
Nov 8 00:38:25.048236 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:38:25.048247 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 8 00:38:25.048259 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:38:25.048279 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:38:25.048290 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:38:25.048301 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Nov 8 00:38:25.048313 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:38:25.048324 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:38:25.048348 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:38:25.048360 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:38:25.048371 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Nov 8 00:38:25.048383 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Nov 8 00:38:25.048394 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Nov 8 00:38:25.048412 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Nov 8 00:38:25.048424 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Nov 8 00:38:25.048440 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Nov 8 00:38:25.048453 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Nov 8 00:38:25.048465 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:38:25.048477 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:38:25.048488 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 8 00:38:25.048500 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Nov 8 00:38:25.048512 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 8 00:38:25.048528 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Nov 8 00:38:25.048540 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 8 00:38:25.048552 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Nov 8 00:38:25.048564 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 8 00:38:25.048576 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Nov 8 00:38:25.048599 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 8 00:38:25.048612 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Nov 8 00:38:25.048624 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 8 00:38:25.048636 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Nov 8 00:38:25.048648 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 8 00:38:25.048665 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Nov 8 00:38:25.048677 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 8 00:38:25.048716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 8 00:38:25.048730 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Nov 8 00:38:25.048742 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Nov 8 00:38:25.048755 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Nov 8 00:38:25.048767 kernel: Zone ranges:
Nov 8 00:38:25.048780 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:38:25.048792 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Nov 8 00:38:25.048810 kernel: Normal empty
Nov 8 00:38:25.048822 kernel: Movable zone start for each node
Nov 8 00:38:25.048834 kernel: Early memory node ranges
Nov 8 00:38:25.048846 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:38:25.048858 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Nov 8 00:38:25.048870 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Nov 8 00:38:25.048882 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:38:25.048894 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:38:25.048906 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Nov 8 00:38:25.048918 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:38:25.048935 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:38:25.048947 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:38:25.048959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:38:25.048971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:38:25.048984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:38:25.048996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:38:25.049008 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:38:25.049020 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:38:25.049032 kernel: TSC deadline timer available
Nov 8 00:38:25.049048 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Nov 8 00:38:25.049060 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:38:25.049072 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 00:38:25.049084 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:38:25.049096 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:38:25.049109 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 8 00:38:25.049121 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 8 00:38:25.049133 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 8 00:38:25.049145 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 8 00:38:25.049161 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:38:25.049174 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:38:25.049187 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:38:25.049200 kernel: random: crng init done
Nov 8 00:38:25.049219 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:38:25.049232 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:38:25.049244 kernel: Fallback order for Node 0: 0
Nov 8 00:38:25.049256 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Nov 8 00:38:25.049279 kernel: Policy zone: DMA32
Nov 8 00:38:25.049291 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:38:25.049304 kernel: software IO TLB: area num 16.
Nov 8 00:38:25.049316 kernel: Memory: 1901536K/2096616K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 194820K reserved, 0K cma-reserved)
Nov 8 00:38:25.049328 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 8 00:38:25.049341 kernel: Kernel/User page tables isolation: enabled
Nov 8 00:38:25.049353 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:38:25.049365 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:38:25.049377 kernel: Dynamic Preempt: voluntary
Nov 8 00:38:25.049394 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:38:25.049407 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:38:25.049419 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 8 00:38:25.049431 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:38:25.049444 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:38:25.049469 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:38:25.049483 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:38:25.049495 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 8 00:38:25.049508 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Nov 8 00:38:25.049521 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:38:25.049533 kernel: Console: colour VGA+ 80x25
Nov 8 00:38:25.049546 kernel: printk: console [tty0] enabled
Nov 8 00:38:25.049563 kernel: printk: console [ttyS0] enabled
Nov 8 00:38:25.049579 kernel: ACPI: Core revision 20230628
Nov 8 00:38:25.049602 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:38:25.049616 kernel: x2apic enabled
Nov 8 00:38:25.049628 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:38:25.049647 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 8 00:38:25.049660 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Nov 8 00:38:25.049673 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:38:25.051709 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 8 00:38:25.051730 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 8 00:38:25.051743 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:38:25.051756 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:38:25.051768 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:38:25.051781 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 8 00:38:25.051794 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:38:25.051813 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:38:25.051826 kernel: MDS: Mitigation: Clear CPU buffers
Nov 8 00:38:25.051839 kernel: MMIO Stale Data: Unknown: No mitigations
Nov 8 00:38:25.051851 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 8 00:38:25.051863 kernel: active return thunk: its_return_thunk
Nov 8 00:38:25.051876 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:38:25.051889 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:38:25.051902 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:38:25.051914 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:38:25.051927 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:38:25.051939 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 8 00:38:25.051957 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:38:25.051969 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:38:25.051982 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:38:25.051994 kernel: landlock: Up and running.
Nov 8 00:38:25.052007 kernel: SELinux: Initializing.
Nov 8 00:38:25.052019 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:38:25.052032 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 00:38:25.052045 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Nov 8 00:38:25.052058 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 00:38:25.052071 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 00:38:25.052089 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 00:38:25.052102 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Nov 8 00:38:25.052115 kernel: signal: max sigframe size: 1776
Nov 8 00:38:25.052127 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:38:25.052141 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:38:25.052153 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:38:25.052166 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:38:25.052179 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:38:25.052192 kernel: .... node #0, CPUs: #1
Nov 8 00:38:25.052209 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Nov 8 00:38:25.052222 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:38:25.052234 kernel: smpboot: Max logical packages: 16
Nov 8 00:38:25.052247 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Nov 8 00:38:25.052260 kernel: devtmpfs: initialized
Nov 8 00:38:25.052273 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:38:25.052286 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:38:25.052298 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 8 00:38:25.052311 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:38:25.052324 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:38:25.052341 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:38:25.052354 kernel: audit: type=2000 audit(1762562303.621:1): state=initialized audit_enabled=0 res=1
Nov 8 00:38:25.052366 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:38:25.052379 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:38:25.052392 kernel: cpuidle: using governor menu
Nov 8 00:38:25.052404 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:38:25.052417 kernel: dca service started, version 1.12.1
Nov 8 00:38:25.052430 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:38:25.052447 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:38:25.052460 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:38:25.052473 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:38:25.052497 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:38:25.052510 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:38:25.052522 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:38:25.052534 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:38:25.052546 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:38:25.052559 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:38:25.052575 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:38:25.052611 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:38:25.052624 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:38:25.052637 kernel: ACPI: Interpreter enabled
Nov 8 00:38:25.052650 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:38:25.052662 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:38:25.052675 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:38:25.056354 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:38:25.056373 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:38:25.056394 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:38:25.056757 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:38:25.056956 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 8 00:38:25.057143 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 8 00:38:25.057162 kernel: PCI host bridge to bus 0000:00
Nov 8 00:38:25.057366 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:38:25.057534 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:38:25.057748 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:38:25.057914 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 8 00:38:25.058079 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:38:25.058235 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Nov 8 00:38:25.058393 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:38:25.058619 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:38:25.058876 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Nov 8 00:38:25.059071 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Nov 8 00:38:25.059260 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Nov 8 00:38:25.059432 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Nov 8 00:38:25.059630 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:38:25.062213 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:38:25.062402 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Nov 8 00:38:25.062632 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 8 00:38:25.064874 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Nov 8 00:38:25.065087 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 8 00:38:25.065271 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Nov 8 00:38:25.065473 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 8 00:38:25.065660 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Nov 8 00:38:25.065917 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 8 00:38:25.066094 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Nov 8 00:38:25.066286 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 8 00:38:25.066468 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Nov 8 00:38:25.066672 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 8 00:38:25.066865 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Nov 8 00:38:25.067085 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 8 00:38:25.067257 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Nov 8 00:38:25.067470 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:38:25.067666 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 8 00:38:25.069949 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Nov 8 00:38:25.070130 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 8 00:38:25.070310 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Nov 8 00:38:25.070529 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 8 00:38:25.071791 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Nov 8 00:38:25.071975 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Nov 8 00:38:25.072149 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Nov 8 00:38:25.072375 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:38:25.072553 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:38:25.073858 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:38:25.074049 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Nov 8 00:38:25.074249 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Nov 8 00:38:25.074479 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:38:25.074666 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 00:38:25.074891 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Nov 8 00:38:25.075090 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Nov 8 00:38:25.075294 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 8 00:38:25.075490 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 8 00:38:25.077752 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 00:38:25.078002 kernel: pci_bus 0000:02: extended config space not accessible
Nov 8 00:38:25.078227 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Nov 8 00:38:25.078434 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Nov 8 00:38:25.078640 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 8 00:38:25.080863 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 8 00:38:25.081081 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 8 00:38:25.081264 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Nov 8 00:38:25.081439 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 8 00:38:25.081659 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 8 00:38:25.081858 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 00:38:25.082067 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 8 00:38:25.082257 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 8 00:38:25.082432 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 8 00:38:25.082619 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 8 00:38:25.082817 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 00:38:25.082993 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 8 00:38:25.083161 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 8 00:38:25.083341 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 00:38:25.083536 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 8 00:38:25.085780 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 8 00:38:25.085976 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 00:38:25.086167 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 8 00:38:25.086367 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 8 00:38:25.086540 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 00:38:25.088764 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 8 00:38:25.088946 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 8 00:38:25.089130 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 00:38:25.089306 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 8 00:38:25.089466 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 8 00:38:25.089657 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 00:38:25.089678 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:38:25.089715 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:38:25.089729 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:38:25.089742 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:38:25.089755 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:38:25.089776 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:38:25.089789 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:38:25.089802 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:38:25.089814 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:38:25.089827 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:38:25.089840 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:38:25.089853 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:38:25.089866 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:38:25.089879 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:38:25.089896 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:38:25.089909 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:38:25.089922 kernel: iommu: Default domain type: Translated
Nov 8 00:38:25.089935 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:38:25.089948 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:38:25.089961 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:38:25.089974 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 8 00:38:25.089987 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Nov 8 00:38:25.090194 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:38:25.090379 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:38:25.090552 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:38:25.090572 kernel: vgaarb: loaded
Nov 8 00:38:25.090602 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:38:25.090616 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:38:25.090629 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:38:25.090642 kernel: pnp: PnP ACPI init
Nov 8 00:38:25.090864 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:38:25.090896 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 00:38:25.090910 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:38:25.090923 kernel: NET: Registered PF_INET protocol family
Nov 8 00:38:25.090937 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:38:25.090958 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 8 00:38:25.090971 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:38:25.090984 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:38:25.090997 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 8 00:38:25.091022 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 8 00:38:25.091035 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:38:25.091048 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 00:38:25.091061 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:38:25.091083 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:38:25.091249 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Nov 8 00:38:25.091424 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 8 00:38:25.091603 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 8 00:38:25.093832 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 8 00:38:25.094012 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 8 00:38:25.094184 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 8 00:38:25.094355 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 8 00:38:25.094525 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 8 00:38:25.100218 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 8 00:38:25.100411 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 8 00:38:25.100595 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 8 00:38:25.100815 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 8 00:38:25.101011 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 8 00:38:25.101214 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 8 00:38:25.101401 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 8 00:38:25.101581 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 8 00:38:25.101795 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 8 00:38:25.102006 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 8 00:38:25.102177 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 8 00:38:25.102348 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 8 00:38:25.102522 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 8 00:38:25.102743 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 00:38:25.102934 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 8 00:38:25.103128 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 8 00:38:25.103318 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 8 00:38:25.103510 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 00:38:25.103747 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 8 00:38:25.103930 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 8 00:38:25.104111 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 8 00:38:25.104302 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 00:38:25.104499 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 8 00:38:25.104738 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 8 00:38:25.104925 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 8 00:38:25.105111 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 00:38:25.105305 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 8 00:38:25.105496 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 8 00:38:25.106765 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 8 00:38:25.106948 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 00:38:25.107120 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 8 00:38:25.107290 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 8 00:38:25.107480 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 8 00:38:25.107676 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 00:38:25.111209 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 8 00:38:25.111402 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 8 00:38:25.111575 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 8 00:38:25.111795 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 00:38:25.111967 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 8 00:38:25.112160 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 8 00:38:25.112348 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 8 00:38:25.112519 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 00:38:25.112726 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:38:25.112897 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:38:25.113040 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:38:25.113181 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 8 00:38:25.113345 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:38:25.113522 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Nov 8 00:38:25.113748 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 8 00:38:25.113925 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Nov 8 00:38:25.114100 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 00:38:25.114290 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 8 00:38:25.114449 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Nov 8 00:38:25.114654 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 8 00:38:25.121363 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 00:38:25.121544 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Nov 8 00:38:25.121764 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 8 00:38:25.121941 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 00:38:25.122138 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 8 00:38:25.122310 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 8 00:38:25.122472 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 00:38:25.122666 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Nov 8 00:38:25.122861 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 8 00:38:25.123024 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 00:38:25.123214 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Nov 8 00:38:25.123386 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 8 00:38:25.123568 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 00:38:25.123803 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Nov 8 00:38:25.123968 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 8 00:38:25.124130 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 00:38:25.124316 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Nov 8 00:38:25.124477 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 8 00:38:25.124651 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 00:38:25.124680 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:38:25.124730 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:38:25.124745 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 00:38:25.124759 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Nov 8 00:38:25.124773 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:38:25.124787 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 8 00:38:25.124801 kernel: Initialise system trusted keyrings
Nov 8 00:38:25.124815 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 8 00:38:25.124829 kernel: Key type asymmetric registered
Nov 8 00:38:25.124861 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:38:25.124874 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:38:25.124888 kernel: io scheduler mq-deadline registered
Nov 8 00:38:25.124901 kernel: io scheduler kyber registered
Nov 8 00:38:25.124914 kernel: io scheduler bfq registered
Nov 8 00:38:25.125112 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Nov 8 00:38:25.125309 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Nov 8 00:38:25.125481 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:38:25.125674 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Nov 8 00:38:25.125864 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Nov 8 00:38:25.126035 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:38:25.126205 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Nov 8 00:38:25.126398 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Nov 8 00:38:25.130578 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:38:25.130801 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Nov 8 00:38:25.130976 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Nov 8 00:38:25.131148 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:38:25.131323 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Nov 8 00:38:25.131497 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Nov 8 00:38:25.131696 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:38:25.131881 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Nov 8 00:38:25.132055 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Nov 8 00:38:25.132231 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:38:25.132409 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Nov 8 00:38:25.132600 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Nov 8 00:38:25.132808 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:38:25.132991 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Nov 8 00:38:25.133165 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Nov 8 00:38:25.133338 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:38:25.133360 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:38:25.133375 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:38:25.133389 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 8 00:38:25.133402 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:38:25.133426 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:38:25.133440 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:38:25.133454 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:38:25.133468 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:38:25.133481 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:38:25.133670 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 8 00:38:25.134886 kernel: rtc_cmos 00:03: registered as rtc0
Nov 8 00:38:25.135057 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:38:24 UTC (1762562304)
Nov 8 00:38:25.135227 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Nov 8 00:38:25.135247 kernel: intel_pstate: CPU model not supported
Nov 8 00:38:25.135261 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:38:25.135275 kernel: Segment Routing with IPv6
Nov 8 00:38:25.135289 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:38:25.135302 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:38:25.135316 kernel: Key type dns_resolver registered
Nov 8 00:38:25.135330 kernel: IPI shorthand broadcast: enabled
Nov 8 00:38:25.135343 kernel: sched_clock: Marking stable (1264003688, 236573052)->(1634570129, -133993389)
Nov 8 00:38:25.135364 kernel: registered taskstats version 1
Nov 8 00:38:25.135378 kernel: Loading compiled-in X.509 certificates
Nov 8 00:38:25.135391 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:38:25.135405 kernel: Key type .fscrypt registered
Nov 8 00:38:25.135418 kernel: Key type fscrypt-provisioning registered
Nov 8 00:38:25.135432 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:38:25.135445 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:38:25.135458 kernel: ima: No architecture policies found
Nov 8 00:38:25.135472 kernel: clk: Disabling unused clocks
Nov 8 00:38:25.135490 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:38:25.135504 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:38:25.135518 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:38:25.135531 kernel: Run /init as init process
Nov 8 00:38:25.135545 kernel: with arguments:
Nov 8 00:38:25.135559 kernel: /init
Nov 8 00:38:25.135572 kernel: with environment:
Nov 8 00:38:25.135596 kernel: HOME=/
Nov 8 00:38:25.135610 kernel: TERM=linux
Nov 8 00:38:25.135633 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:38:25.135650 systemd[1]: Detected virtualization kvm.
Nov 8 00:38:25.135665 systemd[1]: Detected architecture x86-64.
Nov 8 00:38:25.135678 systemd[1]: Running in initrd.
Nov 8 00:38:25.136780 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:38:25.136804 systemd[1]: Hostname set to .
Nov 8 00:38:25.136820 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:38:25.136842 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:38:25.136858 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:38:25.136873 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:38:25.136888 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:38:25.136903 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:38:25.136918 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:38:25.136933 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:38:25.136954 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:38:25.136969 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:38:25.136984 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:38:25.136998 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:38:25.137013 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:38:25.137027 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:38:25.137042 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:38:25.137056 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:38:25.137075 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:38:25.137090 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:38:25.137109 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:38:25.137124 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:38:25.137139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:38:25.137154 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:38:25.137168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:38:25.137183 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:38:25.137197 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:38:25.137217 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:38:25.137232 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:38:25.137246 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:38:25.137261 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:38:25.137275 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:38:25.137290 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:38:25.137305 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:38:25.137332 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:38:25.137351 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:38:25.137402 systemd-journald[201]: Collecting audit messages is disabled.
Nov 8 00:38:25.137452 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:38:25.137467 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:38:25.137494 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:38:25.137508 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:38:25.137522 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:38:25.137537 systemd-journald[201]: Journal started
Nov 8 00:38:25.137580 systemd-journald[201]: Runtime Journal (/run/log/journal/d8990a7f7b424094aa85abb9b676a3cf) is 4.7M, max 38.0M, 33.2M free.
Nov 8 00:38:25.068174 systemd-modules-load[202]: Inserted module 'overlay'
Nov 8 00:38:25.140389 kernel: Bridge firewalling registered
Nov 8 00:38:25.142774 systemd-modules-load[202]: Inserted module 'br_netfilter'
Nov 8 00:38:25.147719 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:38:25.159719 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:38:25.161389 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:38:25.164273 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:38:25.167251 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:38:25.174871 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:38:25.176888 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:38:25.179866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:38:25.206275 dracut-cmdline[227]: dracut-dracut-053
Nov 8 00:38:25.204812 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:38:25.207660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:38:25.212159 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:38:25.221972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:38:25.271751 systemd-resolved[250]: Positive Trust Anchors:
Nov 8 00:38:25.271768 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:38:25.271813 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:38:25.281568 systemd-resolved[250]: Defaulting to hostname 'linux'.
Nov 8 00:38:25.284619 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:38:25.286051 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:38:25.340752 kernel: SCSI subsystem initialized
Nov 8 00:38:25.353704 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:38:25.367725 kernel: iscsi: registered transport (tcp)
Nov 8 00:38:25.394063 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:38:25.394128 kernel: QLogic iSCSI HBA Driver
Nov 8 00:38:25.453830 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:38:25.461904 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:38:25.493931 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:38:25.493989 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:38:25.495734 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:38:25.546740 kernel: raid6: sse2x4 gen() 13284 MB/s
Nov 8 00:38:25.564753 kernel: raid6: sse2x2 gen() 9010 MB/s
Nov 8 00:38:25.586303 kernel: raid6: sse2x1 gen() 9716 MB/s
Nov 8 00:38:25.586622 kernel: raid6: using algorithm sse2x4 gen() 13284 MB/s
Nov 8 00:38:25.605377 kernel: raid6: .... xor() 7497 MB/s, rmw enabled
Nov 8 00:38:25.605436 kernel: raid6: using ssse3x2 recovery algorithm
Nov 8 00:38:25.632725 kernel: xor: automatically using best checksumming function avx
Nov 8 00:38:25.832744 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:38:25.849521 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:38:25.857035 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:38:25.881379 systemd-udevd[419]: Using default interface naming scheme 'v255'.
Nov 8 00:38:25.888906 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:38:25.896870 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:38:25.920297 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Nov 8 00:38:25.961511 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:38:25.974967 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:38:26.088050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:38:26.097090 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:38:26.124833 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:38:26.127753 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:38:26.128568 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:38:26.131069 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:38:26.139891 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:38:26.166760 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:38:26.215732 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Nov 8 00:38:26.234563 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Nov 8 00:38:26.234869 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:38:26.242359 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:38:26.242399 kernel: GPT:17805311 != 125829119
Nov 8 00:38:26.242429 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:38:26.244945 kernel: GPT:17805311 != 125829119
Nov 8 00:38:26.244977 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:38:26.246769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:38:26.264899 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:38:26.265101 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:38:26.267498 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:38:26.273063 kernel: ACPI: bus type USB registered
Nov 8 00:38:26.273092 kernel: usbcore: registered new interface driver usbfs
Nov 8 00:38:26.269102 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:38:26.269293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:38:26.274135 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:38:26.284974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:38:26.299721 kernel: usbcore: registered new interface driver hub
Nov 8 00:38:26.305711 kernel: usbcore: registered new device driver usb
Nov 8 00:38:26.327182 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Nov 8 00:38:26.337718 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 8 00:38:26.452359 kernel: AVX version of gcm_enc/dec engaged.
Nov 8 00:38:26.452409 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:38:26.452440 kernel: libata version 3.00 loaded.
Nov 8 00:38:26.452459 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (476)
Nov 8 00:38:26.452477 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:38:26.453964 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:38:26.453999 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:38:26.454223 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:38:26.454425 kernel: scsi host0: ahci
Nov 8 00:38:26.454656 kernel: scsi host1: ahci
Nov 8 00:38:26.454898 kernel: scsi host2: ahci
Nov 8 00:38:26.455103 kernel: scsi host3: ahci
Nov 8 00:38:26.455312 kernel: scsi host4: ahci
Nov 8 00:38:26.455507 kernel: scsi host5: ahci
Nov 8 00:38:26.457064 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Nov 8 00:38:26.457089 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Nov 8 00:38:26.457115 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Nov 8 00:38:26.457134 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Nov 8 00:38:26.457152 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Nov 8 00:38:26.457170 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Nov 8 00:38:26.460886 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 8 00:38:26.462108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:38:26.481068 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:38:26.487092 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 8 00:38:26.487982 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 8 00:38:26.500913 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:38:26.505865 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:38:26.514707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:38:26.514918 disk-uuid[557]: Primary Header is updated.
Nov 8 00:38:26.514918 disk-uuid[557]: Secondary Entries is updated.
Nov 8 00:38:26.514918 disk-uuid[557]: Secondary Header is updated.
Nov 8 00:38:26.543588 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:38:26.677260 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:38:26.677321 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:38:26.685803 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 8 00:38:26.685845 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:38:26.686731 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:38:26.689166 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:38:26.709469 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Nov 8 00:38:26.709805 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Nov 8 00:38:26.713746 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Nov 8 00:38:26.717340 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Nov 8 00:38:26.717634 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Nov 8 00:38:26.718816 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Nov 8 00:38:26.722997 kernel: hub 1-0:1.0: USB hub found
Nov 8 00:38:26.723309 kernel: hub 1-0:1.0: 4 ports detected
Nov 8 00:38:26.723525 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Nov 8 00:38:26.728316 kernel: hub 2-0:1.0: USB hub found
Nov 8 00:38:26.728614 kernel: hub 2-0:1.0: 4 ports detected
Nov 8 00:38:26.966866 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Nov 8 00:38:27.108715 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:38:27.115045 kernel: usbcore: registered new interface driver usbhid
Nov 8 00:38:27.115089 kernel: usbhid: USB HID core driver
Nov 8 00:38:27.122301 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Nov 8 00:38:27.122342 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Nov 8 00:38:27.531737 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 8 00:38:27.533731 disk-uuid[559]: The operation has completed successfully.
Nov 8 00:38:27.589342 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:38:27.589549 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:38:27.607873 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:38:27.614067 sh[586]: Success
Nov 8 00:38:27.631723 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Nov 8 00:38:27.693923 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:38:27.718818 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:38:27.721406 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:38:27.753884 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:38:27.753956 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:38:27.753976 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:38:27.756403 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:38:27.759703 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:38:27.769051 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:38:27.770485 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:38:27.782908 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:38:27.786442 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:38:27.801987 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:38:27.802053 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:38:27.802080 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:38:27.809715 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:38:27.825007 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:38:27.826753 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:38:27.836134 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:38:27.845969 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:38:27.986334 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:38:27.990476 ignition[671]: Ignition 2.19.0
Nov 8 00:38:27.991568 ignition[671]: Stage: fetch-offline
Nov 8 00:38:27.991661 ignition[671]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:38:27.991700 ignition[671]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 00:38:27.991938 ignition[671]: parsed url from cmdline: ""
Nov 8 00:38:27.991945 ignition[671]: no config URL provided
Nov 8 00:38:27.991955 ignition[671]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:38:27.996925 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:38:27.991970 ignition[671]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:38:27.999744 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:38:27.991979 ignition[671]: failed to fetch config: resource requires networking
Nov 8 00:38:27.992647 ignition[671]: Ignition finished successfully
Nov 8 00:38:28.035998 systemd-networkd[772]: lo: Link UP
Nov 8 00:38:28.036016 systemd-networkd[772]: lo: Gained carrier
Nov 8 00:38:28.038556 systemd-networkd[772]: Enumeration completed
Nov 8 00:38:28.039190 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:38:28.039195 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:38:28.040875 systemd-networkd[772]: eth0: Link UP
Nov 8 00:38:28.040881 systemd-networkd[772]: eth0: Gained carrier
Nov 8 00:38:28.040892 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:38:28.040993 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:38:28.042887 systemd[1]: Reached target network.target - Network.
Nov 8 00:38:28.052942 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:38:28.070933 ignition[775]: Ignition 2.19.0
Nov 8 00:38:28.070959 ignition[775]: Stage: fetch
Nov 8 00:38:28.071797 systemd-networkd[772]: eth0: DHCPv4 address 10.230.37.190/30, gateway 10.230.37.189 acquired from 10.230.37.189
Nov 8 00:38:28.071192 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:38:28.071213 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 00:38:28.071342 ignition[775]: parsed url from cmdline: ""
Nov 8 00:38:28.071349 ignition[775]: no config URL provided
Nov 8 00:38:28.071359 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:38:28.071388 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:38:28.071644 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Nov 8 00:38:28.071732 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Nov 8 00:38:28.071781 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Nov 8 00:38:28.072186 ignition[775]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:38:28.272403 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Nov 8 00:38:28.292191 ignition[775]: GET result: OK
Nov 8 00:38:28.293156 ignition[775]: parsing config with SHA512: eb3ec529764d8ebb569230d9fe217d4a16fca87b7c63aebf403c598113ef4e8e3efe830fc4c0c256a08d4ff86d6c9a10fea0deb6c1ad0d8df8df2a19f3738a9f
Nov 8 00:38:28.299704 unknown[775]: fetched base config from "system"
Nov 8 00:38:28.299721 unknown[775]: fetched base config from "system"
Nov 8 00:38:28.300350 ignition[775]: fetch: fetch complete
Nov 8 00:38:28.299731 unknown[775]: fetched user config from "openstack"
Nov 8 00:38:28.300370 ignition[775]: fetch: fetch passed
Nov 8 00:38:28.302352 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:38:28.300433 ignition[775]: Ignition finished successfully
Nov 8 00:38:28.314649 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:38:28.329022 ignition[782]: Ignition 2.19.0
Nov 8 00:38:28.329042 ignition[782]: Stage: kargs
Nov 8 00:38:28.329257 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:38:28.331977 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:38:28.329277 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 00:38:28.330415 ignition[782]: kargs: kargs passed
Nov 8 00:38:28.330484 ignition[782]: Ignition finished successfully
Nov 8 00:38:28.347447 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:38:28.365579 ignition[789]: Ignition 2.19.0
Nov 8 00:38:28.365604 ignition[789]: Stage: disks
Nov 8 00:38:28.367825 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:38:28.367866 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 00:38:28.369168 ignition[789]: disks: disks passed
Nov 8 00:38:28.369260 ignition[789]: Ignition finished successfully
Nov 8 00:38:28.372064 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:38:28.373998 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:38:28.375112 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:38:28.376793 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:38:28.378453 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:38:28.380264 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:38:28.396912 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:38:28.415933 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 8 00:38:28.419945 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:38:28.427829 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:38:28.547793 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:38:28.549289 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:38:28.550741 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:38:28.560827 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:38:28.563843 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:38:28.565435 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:38:28.567946 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Nov 8 00:38:28.569106 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:38:28.583644 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (806)
Nov 8 00:38:28.583677 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:38:28.583719 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:38:28.583739 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:38:28.569144 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:38:28.588129 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:38:28.589859 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:38:28.591505 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:38:28.603326 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:38:28.675680 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:38:28.686747 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:38:28.696412 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:38:28.708138 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:38:28.818669 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:38:28.824820 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:38:28.827861 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:38:28.840459 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:38:28.842719 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:38:28.867866 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:38:28.881758 ignition[926]: INFO : Ignition 2.19.0
Nov 8 00:38:28.881758 ignition[926]: INFO : Stage: mount
Nov 8 00:38:28.883656 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:38:28.883656 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 00:38:28.886324 ignition[926]: INFO : mount: mount passed
Nov 8 00:38:28.886324 ignition[926]: INFO : Ignition finished successfully
Nov 8 00:38:28.887107 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:38:29.954976 systemd-networkd[772]: eth0: Gained IPv6LL
Nov 8 00:38:31.462332 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:179:896f:24:19ff:fee6:25be/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:896f:24:19ff:fee6:25be/64 assigned by NDisc.
Nov 8 00:38:31.462349 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Nov 8 00:38:35.752274 coreos-metadata[808]: Nov 08 00:38:35.752 WARN failed to locate config-drive, using the metadata service API instead
Nov 8 00:38:35.776982 coreos-metadata[808]: Nov 08 00:38:35.776 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Nov 8 00:38:35.797146 coreos-metadata[808]: Nov 08 00:38:35.797 INFO Fetch successful
Nov 8 00:38:35.798109 coreos-metadata[808]: Nov 08 00:38:35.797 INFO wrote hostname srv-77jcb.gb1.brightbox.com to /sysroot/etc/hostname
Nov 8 00:38:35.801088 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Nov 8 00:38:35.801290 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Nov 8 00:38:35.808864 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:38:35.829031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:38:35.860822 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Nov 8 00:38:35.868990 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:38:35.869041 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:38:35.869062 kernel: BTRFS info (device vda6): using free space tree
Nov 8 00:38:35.873702 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 00:38:35.877120 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:38:35.909654 ignition[959]: INFO : Ignition 2.19.0
Nov 8 00:38:35.909654 ignition[959]: INFO : Stage: files
Nov 8 00:38:35.911637 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:38:35.911637 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 00:38:35.911637 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:38:35.914604 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:38:35.914604 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:38:35.916763 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:38:35.917848 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:38:35.917848 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:38:35.917396 unknown[959]: wrote ssh authorized keys file for user: core
Nov 8 00:38:35.921276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:38:35.921276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 8 00:38:35.921276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:38:35.921276 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 8 00:38:36.119811 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 8 00:38:36.367413 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 00:38:36.369198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:38:36.369198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:38:36.369198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:38:36.369198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:38:36.369198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:38:36.369198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:38:36.369198 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:38:36.383322 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:38:36.383322 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:38:36.383322 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:38:36.383322 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:38:36.383322 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:38:36.383322 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:38:36.383322 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 8 00:38:36.657781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 8 00:38:37.857981 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 00:38:37.861229 ignition[959]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 8 00:38:37.861229 ignition[959]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:38:37.861229 ignition[959]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 8 00:38:37.861229 ignition[959]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 8 00:38:37.861229 ignition[959]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 8 00:38:37.861229 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:38:37.869510 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:38:37.869510 ignition[959]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 8 00:38:37.869510 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:38:37.869510 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:38:37.869510 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:38:37.869510 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:38:37.869510 ignition[959]: INFO : files: files passed
Nov 8 00:38:37.869510 ignition[959]: INFO : Ignition finished successfully
Nov 8 00:38:37.863895 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:38:37.876843 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:38:37.879165 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:38:37.884516 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:38:37.884692 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:38:37.902652 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:38:37.902652 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:38:37.905842 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:38:37.908137 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:38:37.910000 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:38:37.923402 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:38:37.956557 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:38:37.956773 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:38:37.958650 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:38:37.960222 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:38:37.961893 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:38:37.967990 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:38:37.989481 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:38:37.997913 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:38:38.012520 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:38:38.014670 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:38:38.016562 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:38:38.017441 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:38:38.017656 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:38:38.019814 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:38:38.020810 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:38:38.022384 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:38:38.023858 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:38:38.025431 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:38:38.027172 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:38:38.028855 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:38:38.030628 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:38:38.032389 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:38:38.033994 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:38:38.035410 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:38:38.035629 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:38:38.037497 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:38:38.038543 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:38:38.040026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:38:38.040209 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:38:38.041709 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:38:38.041884 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:38:38.044054 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:38:38.044238 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:38:38.045379 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:38:38.045610 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:38:38.053018 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:38:38.053778 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:38:38.054058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:38:38.057983 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:38:38.066415 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:38:38.066635 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:38:38.067637 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:38:38.068368 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:38:38.079667 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:38:38.079880 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:38:38.089729 ignition[1012]: INFO : Ignition 2.19.0
Nov 8 00:38:38.092345 ignition[1012]: INFO : Stage: umount
Nov 8 00:38:38.092345 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:38:38.092345 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 00:38:38.092345 ignition[1012]: INFO : umount: umount passed
Nov 8 00:38:38.092345 ignition[1012]: INFO : Ignition finished successfully
Nov 8 00:38:38.094512 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:38:38.096802 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:38:38.100155 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:38:38.100960 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:38:38.101175 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:38:38.102625 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:38:38.102716 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:38:38.104173 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:38:38.104240 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:38:38.105766 systemd[1]: Stopped target network.target - Network.
Nov 8 00:38:38.107134 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:38:38.107213 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:38:38.108920 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:38:38.110266 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:38:38.113759 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:38:38.115356 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:38:38.117195 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:38:38.119537 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:38:38.119653 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:38:38.124483 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:38:38.124562 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:38:38.125345 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:38:38.125430 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:38:38.126900 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:38:38.126967 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:38:38.128633 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:38:38.130371 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:38:38.132805 systemd-networkd[772]: eth0: DHCPv6 lease lost
Nov 8 00:38:38.134993 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:38:38.135174 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:38:38.137019 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:38:38.137078 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:38:38.147272 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:38:38.148023 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:38:38.148102 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:38:38.150334 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:38:38.151593 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:38:38.151794 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:38:38.164287 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:38:38.164529 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:38:38.167711 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:38:38.168024 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:38:38.169573 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:38:38.169631 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:38:38.171092 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:38:38.171160 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:38:38.173357 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:38:38.173429 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:38:38.174828 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:38:38.174902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:38:38.181929 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:38:38.183063 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:38:38.183140 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:38:38.185906 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:38:38.185978 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:38:38.186759 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:38:38.186828 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:38:38.189137 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:38:38.189208 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:38:38.190005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:38:38.190071 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:38:38.193917 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:38:38.194120 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:38:38.195412 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:38:38.195536 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:38:38.228385 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:38:38.228572 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:38:38.230716 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:38:38.231501 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:38:38.231590 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:38:38.239897 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:38:38.264531 systemd[1]: Switching root.
Nov 8 00:38:38.300527 systemd-journald[201]: Journal stopped
Nov 8 00:38:39.830098 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:38:39.830220 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:38:39.830263 kernel: SELinux: policy capability open_perms=1
Nov 8 00:38:39.830284 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:38:39.830303 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:38:39.830322 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:38:39.830348 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:38:39.830367 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:38:39.830393 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:38:39.830419 kernel: audit: type=1403 audit(1762562318.605:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:38:39.830453 systemd[1]: Successfully loaded SELinux policy in 53.172ms.
Nov 8 00:38:39.830491 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.406ms.
Nov 8 00:38:39.830514 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:38:39.830548 systemd[1]: Detected virtualization kvm.
Nov 8 00:38:39.830569 systemd[1]: Detected architecture x86-64.
Nov 8 00:38:39.830595 systemd[1]: Detected first boot.
Nov 8 00:38:39.830631 systemd[1]: Hostname set to <srv-77jcb.gb1.brightbox.com>.
Nov 8 00:38:39.830653 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:38:39.830678 zram_generator::config[1072]: No configuration found.
Nov 8 00:38:39.831748 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:38:39.831784 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:38:39.831806 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 8 00:38:39.831829 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:38:39.831870 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:38:39.831891 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:38:39.831912 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:38:39.831932 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:38:39.831967 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:38:39.831989 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:38:39.832029 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:38:39.832050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:38:39.832070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:38:39.832093 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:38:39.832113 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:38:39.832148 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:38:39.832184 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:38:39.832231 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:38:39.832272 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:38:39.832295 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:38:39.832317 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:38:39.832347 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:38:39.832369 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:38:39.832401 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:38:39.832424 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:38:39.832445 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:38:39.832466 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:38:39.832493 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:38:39.832528 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:38:39.832570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:38:39.832599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:38:39.832633 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:38:39.832655 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:38:39.832688 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:38:39.832736 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:38:39.832761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:38:39.832801 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:38:39.832830 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:38:39.832881 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:38:39.832903 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:38:39.832923 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:38:39.832945 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:38:39.832966 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:38:39.832985 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:38:39.833008 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:38:39.833028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:38:39.833047 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:38:39.833081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:38:39.833103 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:38:39.833144 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 8 00:38:39.833165 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 8 00:38:39.833227 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:38:39.833271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:38:39.833291 kernel: ACPI: bus type drm_connector registered
Nov 8 00:38:39.833312 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:38:39.833348 kernel: fuse: init (API version 7.39)
Nov 8 00:38:39.833372 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:38:39.833393 kernel: loop: module loaded
Nov 8 00:38:39.833413 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:38:39.833463 systemd-journald[1183]: Collecting audit messages is disabled.
Nov 8 00:38:39.833499 systemd-journald[1183]: Journal started
Nov 8 00:38:39.833554 systemd-journald[1183]: Runtime Journal (/run/log/journal/d8990a7f7b424094aa85abb9b676a3cf) is 4.7M, max 38.0M, 33.2M free.
Nov 8 00:38:39.839720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:38:39.847752 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:38:39.850398 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:38:39.851332 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:38:39.852310 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:38:39.853196 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:38:39.854139 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:38:39.855065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:38:39.856379 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:38:39.857780 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:38:39.859087 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:38:39.859395 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:38:39.861009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:38:39.861285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:38:39.862836 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:38:39.863115 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:38:39.864457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:38:39.864808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:38:39.866068 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:38:39.866349 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:38:39.867540 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:38:39.870044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:38:39.873623 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:38:39.876927 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:38:39.878613 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:38:39.894607 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:38:39.902823 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:38:39.914404 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:38:39.915267 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:38:39.921894 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:38:39.938879 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:38:39.941799 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:38:39.946483 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:38:39.947885 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:38:39.955907 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:38:39.962017 systemd-journald[1183]: Time spent on flushing to /var/log/journal/d8990a7f7b424094aa85abb9b676a3cf is 50.802ms for 1122 entries.
Nov 8 00:38:39.962017 systemd-journald[1183]: System Journal (/var/log/journal/d8990a7f7b424094aa85abb9b676a3cf) is 8.0M, max 584.8M, 576.8M free.
Nov 8 00:38:40.041929 systemd-journald[1183]: Received client request to flush runtime journal.
Nov 8 00:38:39.970402 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:38:39.987258 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:38:39.995800 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:38:39.998350 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:38:40.001501 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:38:40.048565 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:38:40.071265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:38:40.076103 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Nov 8 00:38:40.076128 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Nov 8 00:38:40.094738 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:38:40.102892 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:38:40.137433 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:38:40.150534 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:38:40.170493 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:38:40.183988 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:38:40.190260 udevadm[1247]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 8 00:38:40.221051 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Nov 8 00:38:40.221092 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Nov 8 00:38:40.228753 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:38:40.725242 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:38:40.731992 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:38:40.779023 systemd-udevd[1256]: Using default interface naming scheme 'v255'.
Nov 8 00:38:40.807452 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:38:40.818892 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:38:40.852000 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:38:40.954723 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 00:38:40.972429 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Nov 8 00:38:40.977013 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:38:41.030715 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1257)
Nov 8 00:38:41.048721 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 8 00:38:41.068906 systemd-networkd[1261]: lo: Link UP
Nov 8 00:38:41.069419 systemd-networkd[1261]: lo: Gained carrier
Nov 8 00:38:41.074713 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:38:41.076842 systemd-networkd[1261]: Enumeration completed
Nov 8 00:38:41.077429 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:38:41.077435 systemd-networkd[1261]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:38:41.078942 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:38:41.083375 systemd-networkd[1261]: eth0: Link UP
Nov 8 00:38:41.085868 systemd-networkd[1261]: eth0: Gained carrier
Nov 8 00:38:41.085995 systemd-networkd[1261]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:38:41.111864 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 00:38:41.113635 systemd-networkd[1261]: eth0: DHCPv4 address 10.230.37.190/30, gateway 10.230.37.189 acquired from 10.230.37.189
Nov 8 00:38:41.158706 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 8 00:38:41.162419 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 8 00:38:41.162826 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 8 00:38:41.170495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:38:41.208895 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 8 00:38:41.247022 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:38:41.425667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:38:41.444916 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 00:38:41.451928 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 00:38:41.472083 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:38:41.508242 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 00:38:41.510132 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:38:41.515883 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 00:38:41.532459 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 00:38:41.562830 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:38:41.564430 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:38:41.565926 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:38:41.566116 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:38:41.566971 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:38:41.569808 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:38:41.576906 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:38:41.579868 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:38:41.583206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:38:41.584934 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 00:38:41.605038 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:38:41.612893 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:38:41.616095 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:38:41.633959 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 00:38:41.649727 kernel: loop0: detected capacity change from 0 to 8
Nov 8 00:38:41.662747 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:38:41.665260 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:38:41.666610 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:38:41.693739 kernel: loop1: detected capacity change from 0 to 140768
Nov 8 00:38:41.741975 kernel: loop2: detected capacity change from 0 to 142488
Nov 8 00:38:41.786778 kernel: loop3: detected capacity change from 0 to 224512
Nov 8 00:38:41.834563 kernel: loop4: detected capacity change from 0 to 8
Nov 8 00:38:41.840769 kernel: loop5: detected capacity change from 0 to 140768
Nov 8 00:38:41.867724 kernel: loop6: detected capacity change from 0 to 142488
Nov 8 00:38:41.889713 kernel: loop7: detected capacity change from 0 to 224512
Nov 8 00:38:41.904754 (sd-merge)[1321]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Nov 8 00:38:41.906126 (sd-merge)[1321]: Merged extensions into '/usr'.
Nov 8 00:38:41.912702 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:38:41.913128 systemd[1]: Reloading...
Nov 8 00:38:41.980610 zram_generator::config[1349]: No configuration found.
Nov 8 00:38:42.221796 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:38:42.241567 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:38:42.333092 systemd[1]: Reloading finished in 419 ms.
Nov 8 00:38:42.355382 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:38:42.356866 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:38:42.371009 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:38:42.374923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:38:42.383232 systemd[1]: Reloading requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:38:42.383400 systemd[1]: Reloading...
Nov 8 00:38:42.428088 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:38:42.429367 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:38:42.431267 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:38:42.431971 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Nov 8 00:38:42.432224 systemd-tmpfiles[1413]: ACLs are not supported, ignoring.
Nov 8 00:38:42.437596 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:38:42.437756 systemd-tmpfiles[1413]: Skipping /boot
Nov 8 00:38:42.456081 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:38:42.456271 systemd-tmpfiles[1413]: Skipping /boot
Nov 8 00:38:42.494739 zram_generator::config[1441]: No configuration found.
Nov 8 00:38:42.499255 systemd-networkd[1261]: eth0: Gained IPv6LL Nov 8 00:38:42.686569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:38:42.779748 systemd[1]: Reloading finished in 395 ms. Nov 8 00:38:42.801054 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:38:42.809544 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:38:42.828000 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:38:42.832887 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:38:42.837929 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:38:42.852822 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:38:42.860892 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:38:42.873317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:38:42.873878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:38:42.883015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:38:42.887350 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:38:42.899002 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:38:42.905014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:38:42.905204 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:38:42.918969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:38:42.919376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:38:42.931769 augenrules[1532]: No rules Nov 8 00:38:42.934603 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:38:42.941669 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:38:42.944220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:38:42.960169 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:38:42.964104 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:38:42.966649 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:38:42.967677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:38:42.981113 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:38:42.982094 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:38:42.991867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:38:42.995681 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:38:43.004893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Nov 8 00:38:43.008403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:38:43.015452 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:38:43.018809 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:38:43.026794 systemd[1]: Finished ensure-sysext.service. Nov 8 00:38:43.031544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:38:43.031869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:38:43.033238 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:38:43.033479 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:38:43.040634 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:38:43.054034 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:38:43.057349 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:38:43.060972 systemd-resolved[1517]: Positive Trust Anchors: Nov 8 00:38:43.061459 systemd-resolved[1517]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:38:43.061510 systemd-resolved[1517]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:38:43.061596 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:38:43.067933 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:38:43.071645 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:38:43.075320 systemd-resolved[1517]: Using system hostname 'srv-77jcb.gb1.brightbox.com'. Nov 8 00:38:43.078259 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:38:43.078368 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:38:43.082899 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:38:43.084342 systemd[1]: Reached target network.target - Network. Nov 8 00:38:43.085218 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:38:43.085954 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:38:43.150893 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:38:43.152252 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:38:43.153183 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Nov 8 00:38:43.154070 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:38:43.154946 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:38:43.155855 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:38:43.155902 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:38:43.156573 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:38:43.157578 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:38:43.158522 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:38:43.159346 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:38:43.161271 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:38:43.164822 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:38:43.167580 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:38:43.169822 systemd-networkd[1261]: eth0: Ignoring DHCPv6 address 2a02:1348:179:896f:24:19ff:fee6:25be/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:896f:24:19ff:fee6:25be/64 assigned by NDisc. Nov 8 00:38:43.169834 systemd-networkd[1261]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 8 00:38:43.170032 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:38:43.170823 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:38:43.171544 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:38:43.172520 systemd[1]: System is tainted: cgroupsv1 Nov 8 00:38:43.172584 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:38:43.172626 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:38:43.180895 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:38:43.188917 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:38:43.192865 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:38:43.199849 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:38:43.209883 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:38:43.213923 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:38:43.221701 jq[1574]: false Nov 8 00:38:43.225810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:38:43.238639 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:38:43.243679 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:38:43.259583 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:38:43.271471 dbus-daemon[1573]: [system] SELinux support is enabled Nov 8 00:38:43.273915 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:38:43.281888 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 8 00:38:43.283098 dbus-daemon[1573]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1261 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 8 00:38:43.294879 extend-filesystems[1577]: Found loop4 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found loop5 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found loop6 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found loop7 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found vda Nov 8 00:38:43.294879 extend-filesystems[1577]: Found vda1 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found vda2 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found vda3 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found usr Nov 8 00:38:43.294879 extend-filesystems[1577]: Found vda4 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found vda6 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found vda7 Nov 8 00:38:43.294879 extend-filesystems[1577]: Found vda9 Nov 8 00:38:43.294879 extend-filesystems[1577]: Checking size of /dev/vda9 Nov 8 00:38:43.337066 extend-filesystems[1577]: Resized partition /dev/vda9 Nov 8 00:38:43.298892 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:38:43.301439 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:38:43.322489 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:38:43.335093 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:38:43.343388 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:38:43.346964 extend-filesystems[1606]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:38:43.365780 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Nov 8 00:38:43.364017 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:38:43.364380 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:38:43.367434 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:38:43.367986 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:38:43.376264 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:38:43.376613 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:38:43.392984 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:38:43.407341 jq[1605]: true Nov 8 00:38:43.410637 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 8 00:38:43.431538 update_engine[1601]: I20251108 00:38:43.424357 1601 main.cc:92] Flatcar Update Engine starting Nov 8 00:38:43.450608 update_engine[1601]: I20251108 00:38:43.446302 1601 update_check_scheduler.cc:74] Next update check in 3m5s Nov 8 00:38:43.463413 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1263) Nov 8 00:38:43.462417 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:38:43.463999 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 8 00:38:43.465502 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:38:43.467737 (ntainerd)[1626]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:38:43.470860 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:38:43.472606 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:38:43.472644 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:38:43.474216 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:38:43.478064 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:38:43.494660 tar[1613]: linux-amd64/LICENSE Nov 8 00:38:43.494660 tar[1613]: linux-amd64/helm Nov 8 00:38:43.501352 jq[1625]: true Nov 8 00:38:44.300953 systemd-timesyncd[1559]: Contacted time server 178.62.68.79:123 (0.flatcar.pool.ntp.org). Nov 8 00:38:44.301043 systemd-timesyncd[1559]: Initial clock synchronization to Sat 2025-11-08 00:38:44.300701 UTC. Nov 8 00:38:44.301149 systemd-resolved[1517]: Clock change detected. Flushing caches. Nov 8 00:38:44.364537 systemd-logind[1595]: Watching system buttons on /dev/input/event2 (Power Button) Nov 8 00:38:44.364612 systemd-logind[1595]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:38:44.369856 systemd-logind[1595]: New seat seat0. Nov 8 00:38:44.372455 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:38:44.449175 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 8 00:38:44.449265 bash[1653]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:38:44.454734 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:38:44.480539 systemd[1]: Starting sshkeys.service... Nov 8 00:38:44.498040 extend-filesystems[1606]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:38:44.498040 extend-filesystems[1606]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 8 00:38:44.498040 extend-filesystems[1606]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 8 00:38:44.492653 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:38:44.514312 extend-filesystems[1577]: Resized filesystem in /dev/vda9 Nov 8 00:38:44.493003 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:38:44.528783 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:38:44.542100 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:38:44.587080 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:38:44.587775 dbus-daemon[1573]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1630 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:38:44.587306 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 8 00:38:44.600560 systemd[1]: Starting polkit.service - Authorization Manager... 
Nov 8 00:38:44.648594 polkitd[1667]: Started polkitd version 121 Nov 8 00:38:44.659104 locksmithd[1631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:38:44.665280 polkitd[1667]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:38:44.665388 polkitd[1667]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:38:44.667249 polkitd[1667]: Finished loading, compiling and executing 2 rules Nov 8 00:38:44.670431 dbus-daemon[1573]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:38:44.670699 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:38:44.680198 polkitd[1667]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:38:44.718855 systemd-hostnamed[1630]: Hostname set to (static) Nov 8 00:38:44.733121 containerd[1626]: time="2025-11-08T00:38:44.732969526Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:38:44.817651 containerd[1626]: time="2025-11-08T00:38:44.817588307Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:44.826352 containerd[1626]: time="2025-11-08T00:38:44.825498358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:44.826352 containerd[1626]: time="2025-11-08T00:38:44.825563199Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:38:44.826352 containerd[1626]: time="2025-11-08T00:38:44.825598909Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:38:44.826352 containerd[1626]: time="2025-11-08T00:38:44.825874811Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:38:44.826352 containerd[1626]: time="2025-11-08T00:38:44.825914054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:44.826352 containerd[1626]: time="2025-11-08T00:38:44.826032216Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:44.826352 containerd[1626]: time="2025-11-08T00:38:44.826054850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:44.828055 containerd[1626]: time="2025-11-08T00:38:44.827367786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:44.828055 containerd[1626]: time="2025-11-08T00:38:44.827400674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:44.828055 containerd[1626]: time="2025-11-08T00:38:44.827424564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:44.828055 containerd[1626]: time="2025-11-08T00:38:44.827441470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:44.828055 containerd[1626]: time="2025-11-08T00:38:44.827614119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:44.828055 containerd[1626]: time="2025-11-08T00:38:44.828013319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:38:44.833030 containerd[1626]: time="2025-11-08T00:38:44.831760281Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:38:44.833030 containerd[1626]: time="2025-11-08T00:38:44.831790972Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:38:44.833030 containerd[1626]: time="2025-11-08T00:38:44.831943536Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:38:44.833030 containerd[1626]: time="2025-11-08T00:38:44.832046121Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:38:44.846755 containerd[1626]: time="2025-11-08T00:38:44.846620246Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:38:44.846885 containerd[1626]: time="2025-11-08T00:38:44.846760044Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:38:44.846885 containerd[1626]: time="2025-11-08T00:38:44.846846674Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:38:44.846885 containerd[1626]: time="2025-11-08T00:38:44.846878571Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:38:44.847027 containerd[1626]: time="2025-11-08T00:38:44.846909785Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.847198039Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.847698270Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.847902127Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.847933283Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.847954546Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.847975685Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.848002825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.848033671Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.848057529Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:38:44.848935 containerd[1626]: time="2025-11-08T00:38:44.848104181Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.850993153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851047004Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851071986Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851116089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851232349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851257881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851295152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851323707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851351798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851372907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851415 containerd[1626]: time="2025-11-08T00:38:44.851401554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851870 containerd[1626]: time="2025-11-08T00:38:44.851424347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851870 containerd[1626]: time="2025-11-08T00:38:44.851461024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851870 containerd[1626]: time="2025-11-08T00:38:44.851482989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851870 containerd[1626]: time="2025-11-08T00:38:44.851513637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Nov 8 00:38:44.851870 containerd[1626]: time="2025-11-08T00:38:44.851551622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851870 containerd[1626]: time="2025-11-08T00:38:44.851583851Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:38:44.851870 containerd[1626]: time="2025-11-08T00:38:44.851632587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851870 containerd[1626]: time="2025-11-08T00:38:44.851656479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.851870 containerd[1626]: time="2025-11-08T00:38:44.851674889Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:38:44.853791 containerd[1626]: time="2025-11-08T00:38:44.853177520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:38:44.853791 containerd[1626]: time="2025-11-08T00:38:44.853357950Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:38:44.853791 containerd[1626]: time="2025-11-08T00:38:44.853382279Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:38:44.853791 containerd[1626]: time="2025-11-08T00:38:44.853402400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:38:44.853791 containerd[1626]: time="2025-11-08T00:38:44.853421325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:38:44.853791 containerd[1626]: time="2025-11-08T00:38:44.853447649Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:38:44.853791 containerd[1626]: time="2025-11-08T00:38:44.853473103Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:38:44.853791 containerd[1626]: time="2025-11-08T00:38:44.853493313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:38:44.855387 containerd[1626]: time="2025-11-08T00:38:44.854968288Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:38:44.855387 containerd[1626]: time="2025-11-08T00:38:44.855085442Z" level=info msg="Connect containerd service" Nov 8 00:38:44.864211 containerd[1626]: time="2025-11-08T00:38:44.863273008Z" level=info msg="using legacy CRI server" Nov 8 00:38:44.864211 containerd[1626]: time="2025-11-08T00:38:44.863326531Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:38:44.864211 containerd[1626]: time="2025-11-08T00:38:44.863605456Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:38:44.867434 containerd[1626]: time="2025-11-08T00:38:44.866852189Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:38:44.869513 
containerd[1626]: time="2025-11-08T00:38:44.869459953Z" level=info msg="Start subscribing containerd event" Nov 8 00:38:44.869586 containerd[1626]: time="2025-11-08T00:38:44.869547474Z" level=info msg="Start recovering state" Nov 8 00:38:44.870509 containerd[1626]: time="2025-11-08T00:38:44.869695090Z" level=info msg="Start event monitor" Nov 8 00:38:44.870509 containerd[1626]: time="2025-11-08T00:38:44.869742899Z" level=info msg="Start snapshots syncer" Nov 8 00:38:44.870509 containerd[1626]: time="2025-11-08T00:38:44.869769052Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:38:44.870509 containerd[1626]: time="2025-11-08T00:38:44.869792410Z" level=info msg="Start streaming server" Nov 8 00:38:44.876986 containerd[1626]: time="2025-11-08T00:38:44.875515214Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:38:44.876986 containerd[1626]: time="2025-11-08T00:38:44.875645770Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:38:44.876986 containerd[1626]: time="2025-11-08T00:38:44.875850057Z" level=info msg="containerd successfully booted in 0.145694s" Nov 8 00:38:44.876023 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:38:45.364999 sshd_keygen[1616]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:38:45.432606 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:38:45.448731 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:38:45.465675 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:38:45.466060 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:38:45.476572 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:38:45.506362 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:38:45.516963 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:38:45.528902 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:38:45.530778 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:38:45.646607 tar[1613]: linux-amd64/README.md Nov 8 00:38:45.671922 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:38:45.697394 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:38:45.705683 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:38:46.371351 kubelet[1716]: E1108 00:38:46.371254 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:38:46.373824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:38:46.374266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:38:50.601366 login[1700]: pam_lastlog(login:session): file /var/log/lastlog is locked/read, retrying Nov 8 00:38:50.604003 login[1701]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:38:50.621700 systemd-logind[1595]: New session 2 of user core. Nov 8 00:38:50.623984 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Nov 8 00:38:50.632686 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:38:50.661205 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:38:50.669732 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:38:50.687526 (systemd)[1734]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:38:50.831119 systemd[1734]: Queued start job for default target default.target. Nov 8 00:38:50.831725 systemd[1734]: Created slice app.slice - User Application Slice. Nov 8 00:38:50.831775 systemd[1734]: Reached target paths.target - Paths. Nov 8 00:38:50.831798 systemd[1734]: Reached target timers.target - Timers. Nov 8 00:38:50.839256 systemd[1734]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:38:50.848765 systemd[1734]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:38:50.848842 systemd[1734]: Reached target sockets.target - Sockets. Nov 8 00:38:50.848866 systemd[1734]: Reached target basic.target - Basic System. Nov 8 00:38:50.848929 systemd[1734]: Reached target default.target - Main User Target. Nov 8 00:38:50.848989 systemd[1734]: Startup finished in 151ms. Nov 8 00:38:50.849138 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:38:50.863804 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:38:50.952675 coreos-metadata[1572]: Nov 08 00:38:50.952 WARN failed to locate config-drive, using the metadata service API instead Nov 8 00:38:50.980300 coreos-metadata[1572]: Nov 08 00:38:50.980 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Nov 8 00:38:50.987217 coreos-metadata[1572]: Nov 08 00:38:50.987 INFO Fetch failed with 404: resource not found Nov 8 00:38:50.987217 coreos-metadata[1572]: Nov 08 00:38:50.987 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 8 00:38:50.991823 coreos-metadata[1572]: Nov 08 00:38:50.991 INFO Fetch successful Nov 8 00:38:50.991966 coreos-metadata[1572]: Nov 08 00:38:50.991 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Nov 8 00:38:51.004615 coreos-metadata[1572]: Nov 08 00:38:51.004 INFO Fetch successful Nov 8 00:38:51.004615 coreos-metadata[1572]: Nov 08 00:38:51.004 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Nov 8 00:38:51.030672 coreos-metadata[1572]: Nov 08 00:38:51.030 INFO Fetch successful Nov 8 00:38:51.030787 coreos-metadata[1572]: Nov 08 00:38:51.030 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Nov 8 00:38:51.045862 coreos-metadata[1572]: Nov 08 00:38:51.045 INFO Fetch successful Nov 8 00:38:51.045862 coreos-metadata[1572]: Nov 08 00:38:51.045 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Nov 8 00:38:51.062144 coreos-metadata[1572]: Nov 08 00:38:51.062 INFO Fetch successful Nov 8 00:38:51.096202 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:38:51.097996 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:38:51.603400 login[1700]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:38:51.610567 systemd-logind[1595]: New session 1 of user core. Nov 8 00:38:51.621616 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 8 00:38:51.692486 coreos-metadata[1664]: Nov 08 00:38:51.692 WARN failed to locate config-drive, using the metadata service API instead Nov 8 00:38:51.717063 coreos-metadata[1664]: Nov 08 00:38:51.717 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Nov 8 00:38:51.739608 coreos-metadata[1664]: Nov 08 00:38:51.739 INFO Fetch successful Nov 8 00:38:51.739775 coreos-metadata[1664]: Nov 08 00:38:51.739 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 8 00:38:51.771046 coreos-metadata[1664]: Nov 08 00:38:51.770 INFO Fetch successful Nov 8 00:38:51.772977 unknown[1664]: wrote ssh authorized keys file for user: core Nov 8 00:38:51.789787 update-ssh-keys[1777]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:38:51.790587 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:38:51.796843 systemd[1]: Finished sshkeys.service. Nov 8 00:38:51.802593 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:38:51.802788 systemd[1]: Startup finished in 15.290s (kernel) + 12.614s (userspace) = 27.904s. Nov 8 00:38:52.890955 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:38:52.905546 systemd[1]: Started sshd@0-10.230.37.190:22-139.178.68.195:50232.service - OpenSSH per-connection server daemon (139.178.68.195:50232). Nov 8 00:38:53.816089 sshd[1785]: Accepted publickey for core from 139.178.68.195 port 50232 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:38:53.818357 sshd[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:53.825200 systemd-logind[1595]: New session 3 of user core. Nov 8 00:38:53.836613 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:38:54.594471 systemd[1]: Started sshd@1-10.230.37.190:22-139.178.68.195:36830.service - OpenSSH per-connection server daemon (139.178.68.195:36830). Nov 8 00:38:55.499679 sshd[1790]: Accepted publickey for core from 139.178.68.195 port 36830 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:38:55.501745 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:55.509094 systemd-logind[1595]: New session 4 of user core. Nov 8 00:38:55.517744 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:38:56.134180 sshd[1790]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:56.139957 systemd[1]: sshd@1-10.230.37.190:22-139.178.68.195:36830.service: Deactivated successfully. Nov 8 00:38:56.141324 systemd-logind[1595]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:38:56.143785 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:38:56.144815 systemd-logind[1595]: Removed session 4. Nov 8 00:38:56.286498 systemd[1]: Started sshd@2-10.230.37.190:22-139.178.68.195:36838.service - OpenSSH per-connection server daemon (139.178.68.195:36838). Nov 8 00:38:56.624523 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:38:56.634389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:38:56.804371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:38:56.812330 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:38:56.919871 kubelet[1812]: E1108 00:38:56.919688 1812 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:38:56.924344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:38:56.924759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:38:57.187452 sshd[1798]: Accepted publickey for core from 139.178.68.195 port 36838 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:38:57.189539 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:57.197698 systemd-logind[1595]: New session 5 of user core. Nov 8 00:38:57.203611 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:38:57.814569 sshd[1798]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:57.818441 systemd[1]: sshd@2-10.230.37.190:22-139.178.68.195:36838.service: Deactivated successfully. Nov 8 00:38:57.822799 systemd-logind[1595]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:38:57.823702 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:38:57.825250 systemd-logind[1595]: Removed session 5. Nov 8 00:38:57.966450 systemd[1]: Started sshd@3-10.230.37.190:22-139.178.68.195:36850.service - OpenSSH per-connection server daemon (139.178.68.195:36850). Nov 8 00:38:58.880053 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 36850 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:38:58.882094 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:38:58.890256 systemd-logind[1595]: New session 6 of user core. Nov 8 00:38:58.895560 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:38:59.513632 sshd[1827]: pam_unix(sshd:session): session closed for user core Nov 8 00:38:59.518436 systemd[1]: sshd@3-10.230.37.190:22-139.178.68.195:36850.service: Deactivated successfully. Nov 8 00:38:59.522242 systemd-logind[1595]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:38:59.523009 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:38:59.524539 systemd-logind[1595]: Removed session 6. Nov 8 00:38:59.725466 systemd[1]: Started sshd@4-10.230.37.190:22-139.178.68.195:36862.service - OpenSSH per-connection server daemon (139.178.68.195:36862). Nov 8 00:39:00.763416 sshd[1835]: Accepted publickey for core from 139.178.68.195 port 36862 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:39:00.765401 sshd[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:00.773771 systemd-logind[1595]: New session 7 of user core. Nov 8 00:39:00.776818 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 8 00:39:01.309291 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:39:01.309809 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:39:01.323707 sudo[1839]: pam_unix(sudo:session): session closed for user root Nov 8 00:39:01.472085 sshd[1835]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:01.477352 systemd[1]: sshd@4-10.230.37.190:22-139.178.68.195:36862.service: Deactivated successfully. Nov 8 00:39:01.480763 systemd-logind[1595]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:39:01.481528 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:39:01.483590 systemd-logind[1595]: Removed session 7. Nov 8 00:39:01.626479 systemd[1]: Started sshd@5-10.230.37.190:22-139.178.68.195:36876.service - OpenSSH per-connection server daemon (139.178.68.195:36876). Nov 8 00:39:02.530472 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 36876 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:39:02.532648 sshd[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:02.539615 systemd-logind[1595]: New session 8 of user core. Nov 8 00:39:02.545632 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:39:03.019164 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:39:03.019644 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:39:03.025459 sudo[1849]: pam_unix(sudo:session): session closed for user root Nov 8 00:39:03.033619 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:39:03.034097 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:39:03.059503 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:39:03.061742 auditctl[1852]: No rules Nov 8 00:39:03.062499 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:39:03.062874 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:39:03.073118 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:39:03.106578 augenrules[1871]: No rules Nov 8 00:39:03.108248 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:39:03.110847 sudo[1848]: pam_unix(sudo:session): session closed for user root Nov 8 00:39:03.268545 sshd[1844]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:03.273427 systemd[1]: sshd@5-10.230.37.190:22-139.178.68.195:36876.service: Deactivated successfully. Nov 8 00:39:03.277984 systemd-logind[1595]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:39:03.278173 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:39:03.280365 systemd-logind[1595]: Removed session 8. Nov 8 00:39:03.415724 systemd[1]: Started sshd@6-10.230.37.190:22-139.178.68.195:43960.service - OpenSSH per-connection server daemon (139.178.68.195:43960). Nov 8 00:39:04.333974 sshd[1880]: Accepted publickey for core from 139.178.68.195 port 43960 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:39:04.336108 sshd[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:39:04.343529 systemd-logind[1595]: New session 9 of user core. 
Nov 8 00:39:04.350584 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:39:04.824266 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:39:04.824749 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:39:05.297467 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:39:05.298267 (dockerd)[1901]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:39:05.735748 dockerd[1901]: time="2025-11-08T00:39:05.735538747Z" level=info msg="Starting up" Nov 8 00:39:05.860246 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport355454136-merged.mount: Deactivated successfully. Nov 8 00:39:06.045093 dockerd[1901]: time="2025-11-08T00:39:06.044486800Z" level=info msg="Loading containers: start." Nov 8 00:39:06.178251 kernel: Initializing XFRM netlink socket Nov 8 00:39:06.293050 systemd-networkd[1261]: docker0: Link UP Nov 8 00:39:06.315001 dockerd[1901]: time="2025-11-08T00:39:06.314113566Z" level=info msg="Loading containers: done." Nov 8 00:39:06.334601 dockerd[1901]: time="2025-11-08T00:39:06.334512512Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:39:06.334811 dockerd[1901]: time="2025-11-08T00:39:06.334656171Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:39:06.334865 dockerd[1901]: time="2025-11-08T00:39:06.334836637Z" level=info msg="Daemon has completed initialization" Nov 8 00:39:06.413522 dockerd[1901]: time="2025-11-08T00:39:06.412486254Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:39:06.412927 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:39:06.856397 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3160464041-merged.mount: Deactivated successfully. Nov 8 00:39:07.174936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:39:07.182401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:39:07.341365 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:39:07.354773 (kubelet)[2053]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:39:07.505316 kubelet[2053]: E1108 00:39:07.504985 2053 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:39:07.508015 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:39:07.508341 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:39:07.533242 containerd[1626]: time="2025-11-08T00:39:07.533009273Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:39:08.487900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745354689.mount: Deactivated successfully. 
Nov 8 00:39:10.569103 containerd[1626]: time="2025-11-08T00:39:10.569041518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:10.572227 containerd[1626]: time="2025-11-08T00:39:10.572169769Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Nov 8 00:39:10.572973 containerd[1626]: time="2025-11-08T00:39:10.572894070Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:10.578161 containerd[1626]: time="2025-11-08T00:39:10.577095406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:10.579117 containerd[1626]: time="2025-11-08T00:39:10.578841804Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.045604973s" Nov 8 00:39:10.579117 containerd[1626]: time="2025-11-08T00:39:10.578914586Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 8 00:39:10.579836 containerd[1626]: time="2025-11-08T00:39:10.579777385Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:39:13.113066 containerd[1626]: time="2025-11-08T00:39:13.112984034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:13.114967 containerd[1626]: time="2025-11-08T00:39:13.114638677Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Nov 8 00:39:13.118176 containerd[1626]: time="2025-11-08T00:39:13.116204802Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:13.120307 containerd[1626]: time="2025-11-08T00:39:13.120267945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:13.122324 containerd[1626]: time="2025-11-08T00:39:13.122277160Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.542457427s" Nov 8 00:39:13.122463 containerd[1626]: time="2025-11-08T00:39:13.122436172Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 8 00:39:13.123233 containerd[1626]: 
time="2025-11-08T00:39:13.123194115Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:39:14.758773 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 8 00:39:14.853415 containerd[1626]: time="2025-11-08T00:39:14.853350246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:14.855076 containerd[1626]: time="2025-11-08T00:39:14.854932668Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Nov 8 00:39:14.857195 containerd[1626]: time="2025-11-08T00:39:14.856570022Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:14.860906 containerd[1626]: time="2025-11-08T00:39:14.860860324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:14.862727 containerd[1626]: time="2025-11-08T00:39:14.862690185Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.739445715s" Nov 8 00:39:14.862907 containerd[1626]: time="2025-11-08T00:39:14.862876227Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 8 00:39:14.863719 containerd[1626]: time="2025-11-08T00:39:14.863671269Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:39:16.675215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740737836.mount: Deactivated successfully. 
Nov 8 00:39:17.398635 containerd[1626]: time="2025-11-08T00:39:17.397593211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:17.398635 containerd[1626]: time="2025-11-08T00:39:17.398579227Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Nov 8 00:39:17.399392 containerd[1626]: time="2025-11-08T00:39:17.399357009Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:17.402764 containerd[1626]: time="2025-11-08T00:39:17.402711321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:17.403930 containerd[1626]: time="2025-11-08T00:39:17.403893779Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.540060659s" Nov 8 00:39:17.404058 containerd[1626]: time="2025-11-08T00:39:17.404030589Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 8 00:39:17.404687 containerd[1626]: time="2025-11-08T00:39:17.404658545Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:39:17.758813 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:39:17.769658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:39:17.963373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:39:17.968203 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:39:18.022773 kubelet[2152]: E1108 00:39:18.022277 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:39:18.027381 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:39:18.027771 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:39:18.413067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1888719764.mount: Deactivated successfully. 
Nov 8 00:39:19.662065 containerd[1626]: time="2025-11-08T00:39:19.662005500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:19.664162 containerd[1626]: time="2025-11-08T00:39:19.663643872Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Nov 8 00:39:19.666156 containerd[1626]: time="2025-11-08T00:39:19.664427570Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:19.669107 containerd[1626]: time="2025-11-08T00:39:19.669066597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:19.671447 containerd[1626]: time="2025-11-08T00:39:19.671409155Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.266447517s" Nov 8 00:39:19.671577 containerd[1626]: time="2025-11-08T00:39:19.671549423Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 8 00:39:19.672868 containerd[1626]: time="2025-11-08T00:39:19.672771739Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:39:20.360081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515632467.mount: Deactivated successfully. 
Nov 8 00:39:20.371080 containerd[1626]: time="2025-11-08T00:39:20.371028494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:20.371869 containerd[1626]: time="2025-11-08T00:39:20.371824717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 8 00:39:20.373432 containerd[1626]: time="2025-11-08T00:39:20.373398653Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:20.400930 containerd[1626]: time="2025-11-08T00:39:20.400863352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:20.402468 containerd[1626]: time="2025-11-08T00:39:20.402426805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 729.547501ms" Nov 8 00:39:20.402562 containerd[1626]: time="2025-11-08T00:39:20.402475508Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:39:20.403752 containerd[1626]: time="2025-11-08T00:39:20.403442058Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:39:21.237122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508354278.mount: Deactivated successfully. Nov 8 00:39:23.949908 containerd[1626]: time="2025-11-08T00:39:23.949801650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:23.951800 containerd[1626]: time="2025-11-08T00:39:23.951662824Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Nov 8 00:39:23.953152 containerd[1626]: time="2025-11-08T00:39:23.952608572Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:23.959161 containerd[1626]: time="2025-11-08T00:39:23.957502999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:23.961075 containerd[1626]: time="2025-11-08T00:39:23.961031728Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.557549551s" Nov 8 00:39:23.961237 containerd[1626]: time="2025-11-08T00:39:23.961208470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 8 00:39:27.424812 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:39:27.440492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:39:27.479398 systemd[1]: Reloading requested from client PID 2300 ('systemctl') (unit session-9.scope)... Nov 8 00:39:27.479640 systemd[1]: Reloading... Nov 8 00:39:27.630162 zram_generator::config[2335]: No configuration found. Nov 8 00:39:27.888061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:39:28.000390 systemd[1]: Reloading finished in 519 ms. Nov 8 00:39:28.047656 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:39:28.047782 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:39:28.049250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:39:28.058946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:39:28.355377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:39:28.361152 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:39:28.427885 kubelet[2411]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:39:28.427885 kubelet[2411]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:39:28.428494 kubelet[2411]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:39:28.428494 kubelet[2411]: I1108 00:39:28.428090 2411 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:39:28.914613 update_engine[1601]: I20251108 00:39:28.913348 1601 update_attempter.cc:509] Updating boot flags... 
Nov 8 00:39:28.989789 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2426) Nov 8 00:39:29.008623 kubelet[2411]: I1108 00:39:29.008561 2411 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:39:29.008623 kubelet[2411]: I1108 00:39:29.008619 2411 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:39:29.009539 kubelet[2411]: I1108 00:39:29.009507 2411 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:39:29.078165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2430) Nov 8 00:39:29.107060 kubelet[2411]: E1108 00:39:29.107014 2411 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.37.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:29.112494 kubelet[2411]: I1108 00:39:29.112398 2411 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:39:29.161045 kubelet[2411]: E1108 00:39:29.160991 2411 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:39:29.161231 kubelet[2411]: I1108 00:39:29.161048 2411 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:39:29.169703 kubelet[2411]: I1108 00:39:29.169232 2411 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:39:29.173076 kubelet[2411]: I1108 00:39:29.173004 2411 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:39:29.173414 kubelet[2411]: I1108 00:39:29.173064 2411 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-77jcb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:39:29.175244 kubelet[2411]: I1108 00:39:29.175205 2411 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:39:29.175360 kubelet[2411]: I1108 00:39:29.175247 2411 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:39:29.176754 kubelet[2411]: I1108 00:39:29.176690 2411 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:39:29.182194 kubelet[2411]: I1108 00:39:29.182148 2411 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:39:29.182291 kubelet[2411]: I1108 00:39:29.182202 2411 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:39:29.182732 kubelet[2411]: I1108 00:39:29.182254 2411 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:39:29.182732 kubelet[2411]: I1108 00:39:29.182710 2411 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:39:29.188716 kubelet[2411]: I1108 00:39:29.188504 2411 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:39:29.191971 kubelet[2411]: I1108 00:39:29.191760 2411 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:39:29.194150 kubelet[2411]: W1108 00:39:29.192624 2411 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
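[editor's note] The one-line nodeConfig dump above is plain JSON, so the pieces of interest, here the hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, and so on), can be pulled out with a couple of throwaway struct types. A sketch against an abridged copy of the logged value (field names as logged; everything else is ours):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    type threshold struct {
        Signal   string
        Operator string
        Value    struct {
            Quantity   *string // e.g. "100Mi"; null when a percentage is used
            Percentage float64
        }
    }

    type nodeConfig struct {
        NodeName               string
        CgroupDriver           string
        HardEvictionThresholds []threshold
    }

    func main() {
        // Abridged from the container_manager_linux.go line above.
        raw := `{"NodeName":"srv-77jcb.gb1.brightbox.com","CgroupDriver":"cgroupfs",
          "HardEvictionThresholds":[
            {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
            {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`
        var nc nodeConfig
        if err := json.Unmarshal([]byte(raw), &nc); err != nil {
            panic(err)
        }
        for _, t := range nc.HardEvictionThresholds {
            if t.Value.Quantity != nil {
                fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
            } else {
                fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
            }
        }
    }
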
Nov 8 00:39:29.194150 kubelet[2411]: I1108 00:39:29.193753 2411 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:39:29.194150 kubelet[2411]: I1108 00:39:29.193821 2411 server.go:1287] "Started kubelet" Nov 8 00:39:29.194360 kubelet[2411]: W1108 00:39:29.194079 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.37.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-77jcb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.37.190:6443: connect: connection refused Nov 8 00:39:29.194789 kubelet[2411]: E1108 00:39:29.194496 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.37.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-77jcb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:29.201744 kubelet[2411]: W1108 00:39:29.201678 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.37.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.37.190:6443: connect: connection refused Nov 8 00:39:29.201862 kubelet[2411]: E1108 00:39:29.201750 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.37.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:29.201945 kubelet[2411]: I1108 00:39:29.201856 2411 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:39:29.205266 kubelet[2411]: I1108 00:39:29.204580 2411 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:39:29.206374 kubelet[2411]: I1108 00:39:29.206347 2411 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:39:29.208187 kubelet[2411]: I1108 00:39:29.208148 2411 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:39:29.209960 kubelet[2411]: I1108 00:39:29.209936 2411 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:39:29.217048 kubelet[2411]: E1108 00:39:29.212825 2411 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.37.190:6443/api/v1/namespaces/default/events\": dial tcp 10.230.37.190:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-77jcb.gb1.brightbox.com.1875e122e644449c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-77jcb.gb1.brightbox.com,UID:srv-77jcb.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-77jcb.gb1.brightbox.com,},FirstTimestamp:2025-11-08 00:39:29.193788572 +0000 UTC m=+0.827000813,LastTimestamp:2025-11-08 00:39:29.193788572 +0000 UTC m=+0.827000813,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-77jcb.gb1.brightbox.com,}" Nov 8 00:39:29.220189 kubelet[2411]: E1108 00:39:29.220004 2411 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:39:29.220592 kubelet[2411]: I1108 00:39:29.220564 2411 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:39:29.222153 kubelet[2411]: I1108 00:39:29.222113 2411 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:39:29.222608 kubelet[2411]: E1108 00:39:29.222580 2411 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-77jcb.gb1.brightbox.com\" not found" Nov 8 00:39:29.224462 kubelet[2411]: E1108 00:39:29.224360 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.37.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-77jcb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.37.190:6443: connect: connection refused" interval="200ms" Nov 8 00:39:29.224661 kubelet[2411]: I1108 00:39:29.224633 2411 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:39:29.224791 kubelet[2411]: I1108 00:39:29.224760 2411 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:39:29.227647 kubelet[2411]: I1108 00:39:29.227371 2411 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:39:29.227647 kubelet[2411]: I1108 00:39:29.227462 2411 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:39:29.228251 kubelet[2411]: I1108 00:39:29.228225 2411 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:39:29.239363 kubelet[2411]: I1108 00:39:29.239320 2411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:39:29.241235 kubelet[2411]: I1108 00:39:29.240947 2411 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:39:29.241235 kubelet[2411]: I1108 00:39:29.240994 2411 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:39:29.241235 kubelet[2411]: I1108 00:39:29.241029 2411 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
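[editor's note] Every "connect: connection refused" in this stretch is the same bootstrapping chicken-and-egg: the kubelet and its client-go reflectors are dialing https://10.230.37.190:6443 before the static kube-apiserver pod they are themselves about to start is listening, and the errors stop on their own once the sandboxes below come up. A trivial probe with the same failure mode (address taken from the log; nothing else is cluster-specific):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Reproduces the reflectors' dial error until the static
        // kube-apiserver pod is running and listening on :6443.
        conn, err := net.DialTimeout("tcp", "10.230.37.190:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not up yet:", err) // "connection refused"
            return
        }
        conn.Close()
        fmt.Println("apiserver reachable")
    }
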
Nov 8 00:39:29.241235 kubelet[2411]: I1108 00:39:29.241046 2411 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:39:29.241499 kubelet[2411]: E1108 00:39:29.241459 2411 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:39:29.249328 kubelet[2411]: W1108 00:39:29.249108 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.37.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.37.190:6443: connect: connection refused Nov 8 00:39:29.249328 kubelet[2411]: E1108 00:39:29.249231 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.37.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:29.258087 kubelet[2411]: W1108 00:39:29.257855 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.37.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.37.190:6443: connect: connection refused Nov 8 00:39:29.258087 kubelet[2411]: E1108 00:39:29.257930 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.37.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:29.267405 kubelet[2411]: I1108 00:39:29.267378 2411 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:39:29.267952 kubelet[2411]: I1108 00:39:29.267550 2411 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:39:29.267952 kubelet[2411]: I1108 00:39:29.267656 2411 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:39:29.288739 kubelet[2411]: I1108 00:39:29.288706 2411 policy_none.go:49] "None policy: Start" Nov 8 00:39:29.289027 kubelet[2411]: I1108 00:39:29.288892 2411 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:39:29.289027 kubelet[2411]: I1108 00:39:29.288959 2411 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:39:29.298182 kubelet[2411]: I1108 00:39:29.297785 2411 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:39:29.298182 kubelet[2411]: I1108 00:39:29.298080 2411 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:39:29.298344 kubelet[2411]: I1108 00:39:29.298105 2411 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:39:29.299502 kubelet[2411]: I1108 00:39:29.299479 2411 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:39:29.300829 kubelet[2411]: E1108 00:39:29.300796 2411 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:39:29.300925 kubelet[2411]: E1108 00:39:29.300887 2411 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-77jcb.gb1.brightbox.com\" not found" Nov 8 00:39:29.353769 kubelet[2411]: E1108 00:39:29.353719 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.360722 kubelet[2411]: E1108 00:39:29.360693 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.361411 kubelet[2411]: E1108 00:39:29.361382 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.405092 kubelet[2411]: I1108 00:39:29.405056 2411 kubelet_node_status.go:75] "Attempting to register node" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.409732 kubelet[2411]: E1108 00:39:29.409696 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.37.190:6443/api/v1/nodes\": dial tcp 10.230.37.190:6443: connect: connection refused" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.425807 kubelet[2411]: E1108 00:39:29.425567 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.37.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-77jcb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.37.190:6443: connect: connection refused" interval="400ms" Nov 8 00:39:29.528868 kubelet[2411]: I1108 00:39:29.528792 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.528868 kubelet[2411]: I1108 00:39:29.528877 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d44044c14c61c5770e31a8ae0295615-kubeconfig\") pod \"kube-scheduler-srv-77jcb.gb1.brightbox.com\" (UID: \"5d44044c14c61c5770e31a8ae0295615\") " pod="kube-system/kube-scheduler-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.529601 kubelet[2411]: I1108 00:39:29.528935 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af382d47d621604c7b543692a345bff3-k8s-certs\") pod \"kube-apiserver-srv-77jcb.gb1.brightbox.com\" (UID: \"af382d47d621604c7b543692a345bff3\") " pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.529601 kubelet[2411]: I1108 00:39:29.528967 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-ca-certs\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.529601 
kubelet[2411]: I1108 00:39:29.528997 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-flexvolume-dir\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.529601 kubelet[2411]: I1108 00:39:29.529022 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-k8s-certs\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.529601 kubelet[2411]: I1108 00:39:29.529049 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-kubeconfig\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.529841 kubelet[2411]: I1108 00:39:29.529075 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af382d47d621604c7b543692a345bff3-ca-certs\") pod \"kube-apiserver-srv-77jcb.gb1.brightbox.com\" (UID: \"af382d47d621604c7b543692a345bff3\") " pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.529841 kubelet[2411]: I1108 00:39:29.529116 2411 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af382d47d621604c7b543692a345bff3-usr-share-ca-certificates\") pod \"kube-apiserver-srv-77jcb.gb1.brightbox.com\" (UID: \"af382d47d621604c7b543692a345bff3\") " pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.614143 kubelet[2411]: I1108 00:39:29.613705 2411 kubelet_node_status.go:75] "Attempting to register node" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.614455 kubelet[2411]: E1108 00:39:29.614416 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.37.190:6443/api/v1/nodes\": dial tcp 10.230.37.190:6443: connect: connection refused" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:29.660181 containerd[1626]: time="2025-11-08T00:39:29.659969110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-77jcb.gb1.brightbox.com,Uid:af382d47d621604c7b543692a345bff3,Namespace:kube-system,Attempt:0,}" Nov 8 00:39:29.666332 containerd[1626]: time="2025-11-08T00:39:29.666297661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-77jcb.gb1.brightbox.com,Uid:6d43e12de03ed60955679f7ad4c759b3,Namespace:kube-system,Attempt:0,}" Nov 8 00:39:29.666808 containerd[1626]: time="2025-11-08T00:39:29.666585546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-77jcb.gb1.brightbox.com,Uid:5d44044c14c61c5770e31a8ae0295615,Namespace:kube-system,Attempt:0,}" Nov 8 00:39:29.826948 kubelet[2411]: E1108 00:39:29.826858 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.230.37.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-77jcb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.37.190:6443: connect: connection refused" interval="800ms" Nov 8 00:39:30.017951 kubelet[2411]: I1108 00:39:30.017422 2411 kubelet_node_status.go:75] "Attempting to register node" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:30.017951 kubelet[2411]: E1108 00:39:30.017803 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.37.190:6443/api/v1/nodes\": dial tcp 10.230.37.190:6443: connect: connection refused" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:30.106004 kubelet[2411]: W1108 00:39:30.105780 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.37.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-77jcb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.37.190:6443: connect: connection refused Nov 8 00:39:30.106004 kubelet[2411]: E1108 00:39:30.105865 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.37.190:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-77jcb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:30.373076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount677629797.mount: Deactivated successfully. Nov 8 00:39:30.379351 containerd[1626]: time="2025-11-08T00:39:30.379258153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:39:30.380982 containerd[1626]: time="2025-11-08T00:39:30.380907527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 8 00:39:30.383157 containerd[1626]: time="2025-11-08T00:39:30.383077198Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:39:30.385215 containerd[1626]: time="2025-11-08T00:39:30.385176594Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:39:30.386562 containerd[1626]: time="2025-11-08T00:39:30.386413326Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:39:30.388166 containerd[1626]: time="2025-11-08T00:39:30.387299494Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:39:30.388166 containerd[1626]: time="2025-11-08T00:39:30.387386891Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:39:30.389158 containerd[1626]: time="2025-11-08T00:39:30.388494505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:39:30.390157 containerd[1626]: 
time="2025-11-08T00:39:30.389853461Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 723.20198ms" Nov 8 00:39:30.395265 containerd[1626]: time="2025-11-08T00:39:30.395221515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 735.064762ms" Nov 8 00:39:30.403351 containerd[1626]: time="2025-11-08T00:39:30.403303502Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 736.937249ms" Nov 8 00:39:30.424232 kubelet[2411]: W1108 00:39:30.424118 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.37.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.37.190:6443: connect: connection refused Nov 8 00:39:30.424586 kubelet[2411]: E1108 00:39:30.424489 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.37.190:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:30.607885 containerd[1626]: time="2025-11-08T00:39:30.607419990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:30.607885 containerd[1626]: time="2025-11-08T00:39:30.607524733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:30.607885 containerd[1626]: time="2025-11-08T00:39:30.607549487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:30.607885 containerd[1626]: time="2025-11-08T00:39:30.607679641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:30.616932 containerd[1626]: time="2025-11-08T00:39:30.616814954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:30.616932 containerd[1626]: time="2025-11-08T00:39:30.616257311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:30.617239 containerd[1626]: time="2025-11-08T00:39:30.617182700Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:30.617345 containerd[1626]: time="2025-11-08T00:39:30.617172463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:30.617345 containerd[1626]: time="2025-11-08T00:39:30.617219253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:30.621186 containerd[1626]: time="2025-11-08T00:39:30.620773838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:30.621186 containerd[1626]: time="2025-11-08T00:39:30.617501981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:30.621186 containerd[1626]: time="2025-11-08T00:39:30.620358606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:30.628172 kubelet[2411]: E1108 00:39:30.628025 2411 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.37.190:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-77jcb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.37.190:6443: connect: connection refused" interval="1.6s" Nov 8 00:39:30.693209 kubelet[2411]: W1108 00:39:30.689101 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.37.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.37.190:6443: connect: connection refused Nov 8 00:39:30.693434 kubelet[2411]: E1108 00:39:30.693226 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.37.190:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:30.749153 kubelet[2411]: W1108 00:39:30.749059 2411 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.37.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.37.190:6443: connect: connection refused Nov 8 00:39:30.751143 kubelet[2411]: E1108 00:39:30.750998 2411 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.37.190:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:30.764302 containerd[1626]: time="2025-11-08T00:39:30.764166702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-77jcb.gb1.brightbox.com,Uid:5d44044c14c61c5770e31a8ae0295615,Namespace:kube-system,Attempt:0,} returns sandbox id \"3025ac332337083f4f3ad01b5a24fbfb464b17aedd250ccac59aac0267f213ea\"" Nov 8 00:39:30.778188 containerd[1626]: time="2025-11-08T00:39:30.777942221Z" level=info msg="CreateContainer within sandbox \"3025ac332337083f4f3ad01b5a24fbfb464b17aedd250ccac59aac0267f213ea\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:39:30.787003 containerd[1626]: time="2025-11-08T00:39:30.786961146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-77jcb.gb1.brightbox.com,Uid:6d43e12de03ed60955679f7ad4c759b3,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"9e9625bb1f0b9686aaab137f84dadbee8cae1071a679c76a6a562bb37e71167e\"" Nov 8 00:39:30.790881 containerd[1626]: time="2025-11-08T00:39:30.790754308Z" level=info msg="CreateContainer within sandbox \"9e9625bb1f0b9686aaab137f84dadbee8cae1071a679c76a6a562bb37e71167e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:39:30.795265 containerd[1626]: time="2025-11-08T00:39:30.795229486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-77jcb.gb1.brightbox.com,Uid:af382d47d621604c7b543692a345bff3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb3b790a7b7b8d00a58e7218236a6e4e6600194fde24c644c1d108079cebf158\"" Nov 8 00:39:30.798408 containerd[1626]: time="2025-11-08T00:39:30.798297123Z" level=info msg="CreateContainer within sandbox \"fb3b790a7b7b8d00a58e7218236a6e4e6600194fde24c644c1d108079cebf158\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:39:30.799549 containerd[1626]: time="2025-11-08T00:39:30.799494878Z" level=info msg="CreateContainer within sandbox \"3025ac332337083f4f3ad01b5a24fbfb464b17aedd250ccac59aac0267f213ea\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d27a9973ed8b91959e13bbc0aacd678e7be8db5f4bf80cbfdaf9e7396a679779\"" Nov 8 00:39:30.801271 containerd[1626]: time="2025-11-08T00:39:30.801239741Z" level=info msg="StartContainer for \"d27a9973ed8b91959e13bbc0aacd678e7be8db5f4bf80cbfdaf9e7396a679779\"" Nov 8 00:39:30.810693 containerd[1626]: time="2025-11-08T00:39:30.810575738Z" level=info msg="CreateContainer within sandbox \"9e9625bb1f0b9686aaab137f84dadbee8cae1071a679c76a6a562bb37e71167e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"86fa450f34791e83cb08a82fdfb4314b3ce4cff90e036fb3d1f82e48f7868e4c\"" Nov 8 00:39:30.811256 containerd[1626]: time="2025-11-08T00:39:30.811160839Z" level=info msg="StartContainer for \"86fa450f34791e83cb08a82fdfb4314b3ce4cff90e036fb3d1f82e48f7868e4c\"" Nov 8 00:39:30.814028 containerd[1626]: time="2025-11-08T00:39:30.813908159Z" level=info msg="CreateContainer within sandbox \"fb3b790a7b7b8d00a58e7218236a6e4e6600194fde24c644c1d108079cebf158\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2231069f6e6c0b82dbe2b53ab576c13bfac2d675d08b981a7c54c8f2093b6ae4\"" Nov 8 00:39:30.816489 containerd[1626]: time="2025-11-08T00:39:30.816456496Z" level=info msg="StartContainer for \"2231069f6e6c0b82dbe2b53ab576c13bfac2d675d08b981a7c54c8f2093b6ae4\"" Nov 8 00:39:30.821489 kubelet[2411]: I1108 00:39:30.821378 2411 kubelet_node_status.go:75] "Attempting to register node" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:30.821917 kubelet[2411]: E1108 00:39:30.821839 2411 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.37.190:6443/api/v1/nodes\": dial tcp 10.230.37.190:6443: connect: connection refused" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:30.987734 containerd[1626]: time="2025-11-08T00:39:30.987432208Z" level=info msg="StartContainer for \"86fa450f34791e83cb08a82fdfb4314b3ce4cff90e036fb3d1f82e48f7868e4c\" returns successfully" Nov 8 00:39:31.002575 containerd[1626]: time="2025-11-08T00:39:31.002528414Z" level=info msg="StartContainer for \"2231069f6e6c0b82dbe2b53ab576c13bfac2d675d08b981a7c54c8f2093b6ae4\" returns successfully" Nov 8 00:39:31.010626 containerd[1626]: time="2025-11-08T00:39:31.010588001Z" level=info msg="StartContainer for \"d27a9973ed8b91959e13bbc0aacd678e7be8db5f4bf80cbfdaf9e7396a679779\" returns 
successfully" Nov 8 00:39:31.173573 kubelet[2411]: E1108 00:39:31.173494 2411 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.37.190:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.37.190:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:39:31.277352 kubelet[2411]: E1108 00:39:31.276517 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:31.280159 kubelet[2411]: E1108 00:39:31.279765 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:31.286276 kubelet[2411]: E1108 00:39:31.284353 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:32.290501 kubelet[2411]: E1108 00:39:32.288906 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:32.290501 kubelet[2411]: E1108 00:39:32.289416 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:32.293384 kubelet[2411]: E1108 00:39:32.293352 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:32.432630 kubelet[2411]: I1108 00:39:32.432446 2411 kubelet_node_status.go:75] "Attempting to register node" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:33.291794 kubelet[2411]: E1108 00:39:33.291580 2411 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:33.477275 kubelet[2411]: E1108 00:39:33.477216 2411 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-77jcb.gb1.brightbox.com\" not found" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:33.563622 kubelet[2411]: I1108 00:39:33.563577 2411 kubelet_node_status.go:78] "Successfully registered node" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:33.624185 kubelet[2411]: I1108 00:39:33.624076 2411 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:33.633241 kubelet[2411]: E1108 00:39:33.633177 2411 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-77jcb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:33.633241 kubelet[2411]: I1108 00:39:33.633209 2411 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:33.635214 kubelet[2411]: E1108 00:39:33.635180 2411 kubelet.go:3196] "Failed creating a mirror pod" 
err="pods \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:33.635214 kubelet[2411]: I1108 00:39:33.635213 2411 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:33.638968 kubelet[2411]: E1108 00:39:33.638929 2411 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-77jcb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:34.198941 kubelet[2411]: I1108 00:39:34.198860 2411 apiserver.go:52] "Watching apiserver" Nov 8 00:39:34.227545 kubelet[2411]: I1108 00:39:34.227472 2411 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:39:35.867575 systemd[1]: Reloading requested from client PID 2702 ('systemctl') (unit session-9.scope)... Nov 8 00:39:35.868095 systemd[1]: Reloading... Nov 8 00:39:35.987169 zram_generator::config[2737]: No configuration found. Nov 8 00:39:36.194628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:39:36.315620 systemd[1]: Reloading finished in 446 ms. Nov 8 00:39:36.364389 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:39:36.379866 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:39:36.380343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:39:36.395780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:39:36.691835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:39:36.712793 (kubelet)[2815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:39:36.818918 kubelet[2815]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:39:36.818918 kubelet[2815]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:39:36.818918 kubelet[2815]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:39:36.819641 kubelet[2815]: I1108 00:39:36.819077 2815 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:39:36.831469 kubelet[2815]: I1108 00:39:36.831435 2815 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:39:36.831469 kubelet[2815]: I1108 00:39:36.831464 2815 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:39:36.832022 kubelet[2815]: I1108 00:39:36.831948 2815 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:39:36.837571 kubelet[2815]: I1108 00:39:36.836378 2815 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:39:36.843071 kubelet[2815]: I1108 00:39:36.842821 2815 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:39:36.861032 kubelet[2815]: E1108 00:39:36.860043 2815 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:39:36.861032 kubelet[2815]: I1108 00:39:36.860106 2815 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:39:36.869797 kubelet[2815]: I1108 00:39:36.869449 2815 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:39:36.871343 kubelet[2815]: I1108 00:39:36.871294 2815 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:39:36.871710 kubelet[2815]: I1108 00:39:36.871439 2815 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-77jcb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 8 00:39:36.871949 kubelet[2815]: I1108 00:39:36.871928 2815 topology_manager.go:138] 
"Creating topology manager with none policy" Nov 8 00:39:36.872081 kubelet[2815]: I1108 00:39:36.872063 2815 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:39:36.872263 kubelet[2815]: I1108 00:39:36.872243 2815 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:39:36.876122 kubelet[2815]: I1108 00:39:36.876028 2815 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:39:36.879368 kubelet[2815]: I1108 00:39:36.877600 2815 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:39:36.879368 kubelet[2815]: I1108 00:39:36.877644 2815 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:39:36.879368 kubelet[2815]: I1108 00:39:36.877688 2815 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:39:36.882388 kubelet[2815]: I1108 00:39:36.882341 2815 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:39:36.883677 kubelet[2815]: I1108 00:39:36.883013 2815 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:39:36.883677 kubelet[2815]: I1108 00:39:36.883672 2815 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:39:36.883811 kubelet[2815]: I1108 00:39:36.883773 2815 server.go:1287] "Started kubelet" Nov 8 00:39:36.887573 kubelet[2815]: I1108 00:39:36.886167 2815 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:39:36.895203 kubelet[2815]: I1108 00:39:36.895111 2815 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:39:36.897209 kubelet[2815]: I1108 00:39:36.896547 2815 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:39:36.898230 kubelet[2815]: I1108 00:39:36.897945 2815 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:39:36.900152 kubelet[2815]: I1108 00:39:36.899791 2815 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:39:36.900152 kubelet[2815]: I1108 00:39:36.900099 2815 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:39:36.907810 kubelet[2815]: I1108 00:39:36.907561 2815 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:39:36.909761 kubelet[2815]: I1108 00:39:36.909658 2815 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:39:36.909953 kubelet[2815]: I1108 00:39:36.909930 2815 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:39:36.915488 kubelet[2815]: I1108 00:39:36.915455 2815 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:39:36.915644 kubelet[2815]: I1108 00:39:36.915611 2815 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:39:36.919954 kubelet[2815]: I1108 00:39:36.919766 2815 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:39:36.922182 kubelet[2815]: E1108 00:39:36.922120 2815 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:39:36.924341 kubelet[2815]: I1108 00:39:36.922739 2815 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:39:36.924341 kubelet[2815]: I1108 00:39:36.922797 2815 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:39:36.924341 kubelet[2815]: I1108 00:39:36.922833 2815 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:39:36.924341 kubelet[2815]: I1108 00:39:36.922846 2815 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:39:36.924341 kubelet[2815]: I1108 00:39:36.922935 2815 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:39:36.924341 kubelet[2815]: E1108 00:39:36.922935 2815 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:39:37.023574 kubelet[2815]: E1108 00:39:37.023193 2815 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:39:37.024089 kubelet[2815]: I1108 00:39:37.023958 2815 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:39:37.024089 kubelet[2815]: I1108 00:39:37.023980 2815 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:39:37.024089 kubelet[2815]: I1108 00:39:37.024025 2815 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:39:37.024884 kubelet[2815]: I1108 00:39:37.024665 2815 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:39:37.024884 kubelet[2815]: I1108 00:39:37.024689 2815 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:39:37.024884 kubelet[2815]: I1108 00:39:37.024752 2815 policy_none.go:49] "None policy: Start" Nov 8 00:39:37.024884 kubelet[2815]: I1108 00:39:37.024767 2815 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:39:37.025228 kubelet[2815]: I1108 00:39:37.025102 2815 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:39:37.025466 kubelet[2815]: I1108 00:39:37.025444 2815 state_mem.go:75] "Updated machine memory state" Nov 8 00:39:37.031209 kubelet[2815]: I1108 00:39:37.031176 2815 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:39:37.031553 kubelet[2815]: I1108 00:39:37.031531 2815 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:39:37.032021 kubelet[2815]: I1108 00:39:37.031639 2815 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:39:37.035669 kubelet[2815]: I1108 00:39:37.034227 2815 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:39:37.037763 kubelet[2815]: E1108 00:39:37.037734 2815 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:39:37.152229 kubelet[2815]: I1108 00:39:37.151888 2815 kubelet_node_status.go:75] "Attempting to register node" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.165032 kubelet[2815]: I1108 00:39:37.164615 2815 kubelet_node_status.go:124] "Node was previously registered" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.165032 kubelet[2815]: I1108 00:39:37.164730 2815 kubelet_node_status.go:78] "Successfully registered node" node="srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.226190 kubelet[2815]: I1108 00:39:37.225603 2815 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.227701 kubelet[2815]: I1108 00:39:37.227043 2815 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.227701 kubelet[2815]: I1108 00:39:37.227353 2815 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.236601 kubelet[2815]: W1108 00:39:37.235815 2815 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:39:37.238786 kubelet[2815]: W1108 00:39:37.238223 2815 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:39:37.240474 kubelet[2815]: W1108 00:39:37.239879 2815 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:39:37.411285 kubelet[2815]: I1108 00:39:37.411229 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5d44044c14c61c5770e31a8ae0295615-kubeconfig\") pod \"kube-scheduler-srv-77jcb.gb1.brightbox.com\" (UID: \"5d44044c14c61c5770e31a8ae0295615\") " pod="kube-system/kube-scheduler-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.411424 kubelet[2815]: I1108 00:39:37.411305 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af382d47d621604c7b543692a345bff3-usr-share-ca-certificates\") pod \"kube-apiserver-srv-77jcb.gb1.brightbox.com\" (UID: \"af382d47d621604c7b543692a345bff3\") " pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.411424 kubelet[2815]: I1108 00:39:37.411341 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-ca-certs\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.411562 kubelet[2815]: I1108 00:39:37.411388 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-k8s-certs\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.411562 kubelet[2815]: I1108 00:39:37.411478 2815 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.411562 kubelet[2815]: I1108 00:39:37.411514 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af382d47d621604c7b543692a345bff3-ca-certs\") pod \"kube-apiserver-srv-77jcb.gb1.brightbox.com\" (UID: \"af382d47d621604c7b543692a345bff3\") " pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.411562 kubelet[2815]: I1108 00:39:37.411549 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af382d47d621604c7b543692a345bff3-k8s-certs\") pod \"kube-apiserver-srv-77jcb.gb1.brightbox.com\" (UID: \"af382d47d621604c7b543692a345bff3\") " pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.411747 kubelet[2815]: I1108 00:39:37.411600 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-flexvolume-dir\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.411747 kubelet[2815]: I1108 00:39:37.411643 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d43e12de03ed60955679f7ad4c759b3-kubeconfig\") pod \"kube-controller-manager-srv-77jcb.gb1.brightbox.com\" (UID: \"6d43e12de03ed60955679f7ad4c759b3\") " pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.897118 kubelet[2815]: I1108 00:39:37.896735 2815 apiserver.go:52] "Watching apiserver" Nov 8 00:39:37.910342 kubelet[2815]: I1108 00:39:37.910078 2815 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:39:37.974089 kubelet[2815]: I1108 00:39:37.974050 2815 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:37.981975 kubelet[2815]: W1108 00:39:37.981949 2815 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 00:39:37.982105 kubelet[2815]: E1108 00:39:37.982019 2815 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-77jcb.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-77jcb.gb1.brightbox.com" Nov 8 00:39:38.024719 kubelet[2815]: I1108 00:39:38.024584 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-77jcb.gb1.brightbox.com" podStartSLOduration=1.024551989 podStartE2EDuration="1.024551989s" podCreationTimestamp="2025-11-08 00:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:39:38.007439897 +0000 UTC m=+1.280954007" watchObservedRunningTime="2025-11-08 
00:39:38.024551989 +0000 UTC m=+1.298066105" Nov 8 00:39:38.035336 kubelet[2815]: I1108 00:39:38.035058 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-77jcb.gb1.brightbox.com" podStartSLOduration=1.035042569 podStartE2EDuration="1.035042569s" podCreationTimestamp="2025-11-08 00:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:39:38.033969466 +0000 UTC m=+1.307483589" watchObservedRunningTime="2025-11-08 00:39:38.035042569 +0000 UTC m=+1.308556680" Nov 8 00:39:38.035336 kubelet[2815]: I1108 00:39:38.035192 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-77jcb.gb1.brightbox.com" podStartSLOduration=1.03518117 podStartE2EDuration="1.03518117s" podCreationTimestamp="2025-11-08 00:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:39:38.024965739 +0000 UTC m=+1.298479861" watchObservedRunningTime="2025-11-08 00:39:38.03518117 +0000 UTC m=+1.308695273" Nov 8 00:39:40.133201 kubelet[2815]: I1108 00:39:40.133106 2815 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:39:40.134436 containerd[1626]: time="2025-11-08T00:39:40.134291864Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:39:40.134967 kubelet[2815]: I1108 00:39:40.134669 2815 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:39:41.135564 kubelet[2815]: I1108 00:39:41.135265 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38d28967-f825-4a36-af1e-0ad30c6c4001-lib-modules\") pod \"kube-proxy-t5z8h\" (UID: \"38d28967-f825-4a36-af1e-0ad30c6c4001\") " pod="kube-system/kube-proxy-t5z8h" Nov 8 00:39:41.135564 kubelet[2815]: I1108 00:39:41.135351 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkrbf\" (UniqueName: \"kubernetes.io/projected/38d28967-f825-4a36-af1e-0ad30c6c4001-kube-api-access-gkrbf\") pod \"kube-proxy-t5z8h\" (UID: \"38d28967-f825-4a36-af1e-0ad30c6c4001\") " pod="kube-system/kube-proxy-t5z8h" Nov 8 00:39:41.135564 kubelet[2815]: I1108 00:39:41.135415 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/38d28967-f825-4a36-af1e-0ad30c6c4001-kube-proxy\") pod \"kube-proxy-t5z8h\" (UID: \"38d28967-f825-4a36-af1e-0ad30c6c4001\") " pod="kube-system/kube-proxy-t5z8h" Nov 8 00:39:41.135564 kubelet[2815]: I1108 00:39:41.135448 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38d28967-f825-4a36-af1e-0ad30c6c4001-xtables-lock\") pod \"kube-proxy-t5z8h\" (UID: \"38d28967-f825-4a36-af1e-0ad30c6c4001\") " pod="kube-system/kube-proxy-t5z8h" Nov 8 00:39:41.238624 kubelet[2815]: I1108 00:39:41.235927 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xbnt\" (UniqueName: \"kubernetes.io/projected/293a1b29-e507-4ab2-8d32-0ff787354de9-kube-api-access-6xbnt\") pod 
\"tigera-operator-7dcd859c48-crt78\" (UID: \"293a1b29-e507-4ab2-8d32-0ff787354de9\") " pod="tigera-operator/tigera-operator-7dcd859c48-crt78" Nov 8 00:39:41.238624 kubelet[2815]: I1108 00:39:41.236022 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/293a1b29-e507-4ab2-8d32-0ff787354de9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-crt78\" (UID: \"293a1b29-e507-4ab2-8d32-0ff787354de9\") " pod="tigera-operator/tigera-operator-7dcd859c48-crt78" Nov 8 00:39:41.348200 containerd[1626]: time="2025-11-08T00:39:41.346514112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5z8h,Uid:38d28967-f825-4a36-af1e-0ad30c6c4001,Namespace:kube-system,Attempt:0,}" Nov 8 00:39:41.413479 containerd[1626]: time="2025-11-08T00:39:41.412479202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:41.413479 containerd[1626]: time="2025-11-08T00:39:41.412646416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:41.413479 containerd[1626]: time="2025-11-08T00:39:41.412700410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:41.413479 containerd[1626]: time="2025-11-08T00:39:41.412882089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:41.476515 containerd[1626]: time="2025-11-08T00:39:41.476424798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5z8h,Uid:38d28967-f825-4a36-af1e-0ad30c6c4001,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9cacc66983eb90f8ae4e22c334ea436e600cc6033e6a35e093d251f4369bdfb\"" Nov 8 00:39:41.481582 containerd[1626]: time="2025-11-08T00:39:41.481512105Z" level=info msg="CreateContainer within sandbox \"d9cacc66983eb90f8ae4e22c334ea436e600cc6033e6a35e093d251f4369bdfb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:39:41.491790 containerd[1626]: time="2025-11-08T00:39:41.491740495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-crt78,Uid:293a1b29-e507-4ab2-8d32-0ff787354de9,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:39:41.504393 containerd[1626]: time="2025-11-08T00:39:41.504343950Z" level=info msg="CreateContainer within sandbox \"d9cacc66983eb90f8ae4e22c334ea436e600cc6033e6a35e093d251f4369bdfb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bee0fc14c70ed36025d30e9703466008644308dda3b1972abf619a89aa100853\"" Nov 8 00:39:41.508373 containerd[1626]: time="2025-11-08T00:39:41.506732206Z" level=info msg="StartContainer for \"bee0fc14c70ed36025d30e9703466008644308dda3b1972abf619a89aa100853\"" Nov 8 00:39:41.540565 containerd[1626]: time="2025-11-08T00:39:41.540113364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:41.540565 containerd[1626]: time="2025-11-08T00:39:41.540221393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:41.540565 containerd[1626]: time="2025-11-08T00:39:41.540259530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:41.540565 containerd[1626]: time="2025-11-08T00:39:41.540406394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:41.617426 containerd[1626]: time="2025-11-08T00:39:41.617374742Z" level=info msg="StartContainer for \"bee0fc14c70ed36025d30e9703466008644308dda3b1972abf619a89aa100853\" returns successfully" Nov 8 00:39:41.662424 containerd[1626]: time="2025-11-08T00:39:41.662212626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-crt78,Uid:293a1b29-e507-4ab2-8d32-0ff787354de9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"86b23e9646aed5b078a4a57fff71ec1ea1808c58504925b925aec05f45cd2560\"" Nov 8 00:39:41.667330 containerd[1626]: time="2025-11-08T00:39:41.666863691Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:39:42.006737 kubelet[2815]: I1108 00:39:42.006201 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t5z8h" podStartSLOduration=1.006180677 podStartE2EDuration="1.006180677s" podCreationTimestamp="2025-11-08 00:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:39:42.005019001 +0000 UTC m=+5.278533123" watchObservedRunningTime="2025-11-08 00:39:42.006180677 +0000 UTC m=+5.279694790" Nov 8 00:39:43.468400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045868777.mount: Deactivated successfully. Nov 8 00:39:44.561999 containerd[1626]: time="2025-11-08T00:39:44.561922310Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:44.564004 containerd[1626]: time="2025-11-08T00:39:44.563948585Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:39:44.565080 containerd[1626]: time="2025-11-08T00:39:44.565035486Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:44.568841 containerd[1626]: time="2025-11-08T00:39:44.568786510Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:39:44.573566 containerd[1626]: time="2025-11-08T00:39:44.573473519Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.906502121s" Nov 8 00:39:44.573566 containerd[1626]: time="2025-11-08T00:39:44.573549529Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:39:44.577744 containerd[1626]: time="2025-11-08T00:39:44.577008510Z" level=info msg="CreateContainer within sandbox \"86b23e9646aed5b078a4a57fff71ec1ea1808c58504925b925aec05f45cd2560\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:39:44.593526 containerd[1626]: 
time="2025-11-08T00:39:44.593484802Z" level=info msg="CreateContainer within sandbox \"86b23e9646aed5b078a4a57fff71ec1ea1808c58504925b925aec05f45cd2560\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"953c0dc6dd5da8f5136dd974468cf30f2d74a92b8856ea5954a76b43d8135f2b\"" Nov 8 00:39:44.595858 containerd[1626]: time="2025-11-08T00:39:44.595826054Z" level=info msg="StartContainer for \"953c0dc6dd5da8f5136dd974468cf30f2d74a92b8856ea5954a76b43d8135f2b\"" Nov 8 00:39:44.679813 containerd[1626]: time="2025-11-08T00:39:44.679662841Z" level=info msg="StartContainer for \"953c0dc6dd5da8f5136dd974468cf30f2d74a92b8856ea5954a76b43d8135f2b\" returns successfully" Nov 8 00:39:47.407704 kubelet[2815]: I1108 00:39:47.407283 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-crt78" podStartSLOduration=3.497104096 podStartE2EDuration="6.407260752s" podCreationTimestamp="2025-11-08 00:39:41 +0000 UTC" firstStartedPulling="2025-11-08 00:39:41.664256632 +0000 UTC m=+4.937770735" lastFinishedPulling="2025-11-08 00:39:44.574413294 +0000 UTC m=+7.847927391" observedRunningTime="2025-11-08 00:39:45.020327203 +0000 UTC m=+8.293841324" watchObservedRunningTime="2025-11-08 00:39:47.407260752 +0000 UTC m=+10.680774858" Nov 8 00:39:52.217004 sudo[1884]: pam_unix(sudo:session): session closed for user root Nov 8 00:39:52.372811 sshd[1880]: pam_unix(sshd:session): session closed for user core Nov 8 00:39:52.384533 systemd[1]: sshd@6-10.230.37.190:22-139.178.68.195:43960.service: Deactivated successfully. Nov 8 00:39:52.394964 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:39:52.397205 systemd-logind[1595]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:39:52.402290 systemd-logind[1595]: Removed session 9. 
Nov 8 00:39:58.795413 kubelet[2815]: I1108 00:39:58.795339 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxsl2\" (UniqueName: \"kubernetes.io/projected/d17e8806-23ae-4ebf-aa67-c4e316e0ad15-kube-api-access-cxsl2\") pod \"calico-typha-86d9d7f89d-9h9wp\" (UID: \"d17e8806-23ae-4ebf-aa67-c4e316e0ad15\") " pod="calico-system/calico-typha-86d9d7f89d-9h9wp" Nov 8 00:39:58.795413 kubelet[2815]: I1108 00:39:58.795424 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d17e8806-23ae-4ebf-aa67-c4e316e0ad15-tigera-ca-bundle\") pod \"calico-typha-86d9d7f89d-9h9wp\" (UID: \"d17e8806-23ae-4ebf-aa67-c4e316e0ad15\") " pod="calico-system/calico-typha-86d9d7f89d-9h9wp" Nov 8 00:39:58.797111 kubelet[2815]: I1108 00:39:58.795458 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d17e8806-23ae-4ebf-aa67-c4e316e0ad15-typha-certs\") pod \"calico-typha-86d9d7f89d-9h9wp\" (UID: \"d17e8806-23ae-4ebf-aa67-c4e316e0ad15\") " pod="calico-system/calico-typha-86d9d7f89d-9h9wp" Nov 8 00:39:58.985516 containerd[1626]: time="2025-11-08T00:39:58.985441042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86d9d7f89d-9h9wp,Uid:d17e8806-23ae-4ebf-aa67-c4e316e0ad15,Namespace:calico-system,Attempt:0,}" Nov 8 00:39:58.997514 kubelet[2815]: I1108 00:39:58.996594 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/390a1c68-66b0-4ba5-847f-d8d370db147c-cni-net-dir\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.997514 kubelet[2815]: I1108 00:39:58.996646 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/390a1c68-66b0-4ba5-847f-d8d370db147c-flexvol-driver-host\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.997514 kubelet[2815]: I1108 00:39:58.996681 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/390a1c68-66b0-4ba5-847f-d8d370db147c-node-certs\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.997514 kubelet[2815]: I1108 00:39:58.996710 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/390a1c68-66b0-4ba5-847f-d8d370db147c-cni-bin-dir\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.997514 kubelet[2815]: I1108 00:39:58.996736 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/390a1c68-66b0-4ba5-847f-d8d370db147c-lib-modules\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.997836 kubelet[2815]: I1108 00:39:58.996766 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9cr57\" (UniqueName: \"kubernetes.io/projected/390a1c68-66b0-4ba5-847f-d8d370db147c-kube-api-access-9cr57\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.997836 kubelet[2815]: I1108 00:39:58.996814 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/390a1c68-66b0-4ba5-847f-d8d370db147c-cni-log-dir\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.997836 kubelet[2815]: I1108 00:39:58.996840 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/390a1c68-66b0-4ba5-847f-d8d370db147c-policysync\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.997836 kubelet[2815]: I1108 00:39:58.996866 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/390a1c68-66b0-4ba5-847f-d8d370db147c-var-lib-calico\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.997836 kubelet[2815]: I1108 00:39:58.996896 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/390a1c68-66b0-4ba5-847f-d8d370db147c-tigera-ca-bundle\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.998055 kubelet[2815]: I1108 00:39:58.996937 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/390a1c68-66b0-4ba5-847f-d8d370db147c-var-run-calico\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:58.998055 kubelet[2815]: I1108 00:39:58.996962 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/390a1c68-66b0-4ba5-847f-d8d370db147c-xtables-lock\") pod \"calico-node-9vqf9\" (UID: \"390a1c68-66b0-4ba5-847f-d8d370db147c\") " pod="calico-system/calico-node-9vqf9" Nov 8 00:39:59.026909 containerd[1626]: time="2025-11-08T00:39:59.026735926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:59.027062 containerd[1626]: time="2025-11-08T00:39:59.026903452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:59.027062 containerd[1626]: time="2025-11-08T00:39:59.026931994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:59.027322 containerd[1626]: time="2025-11-08T00:39:59.027075524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:39:59.091469 kubelet[2815]: E1108 00:39:59.091415 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7"
Nov 8 00:39:59.125232 kubelet[2815]: E1108 00:39:59.125117 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:39:59.126928 kubelet[2815]: W1108 00:39:59.125388 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:39:59.128559 kubelet[2815]: E1108 00:39:59.128215 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:39:59.195931 containerd[1626]: time="2025-11-08T00:39:59.193334589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9vqf9,Uid:390a1c68-66b0-4ba5-847f-d8d370db147c,Namespace:calico-system,Attempt:0,}"
Nov 8 00:39:59.202942 kubelet[2815]: I1108 00:39:59.201901 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/732940a2-6d95-4610-b476-89508bce10b7-kubelet-dir\") pod \"csi-node-driver-frtm6\" (UID: \"732940a2-6d95-4610-b476-89508bce10b7\") " pod="calico-system/csi-node-driver-frtm6"
Nov 8 00:39:59.205611 kubelet[2815]: I1108 00:39:59.203541 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/732940a2-6d95-4610-b476-89508bce10b7-varrun\") pod \"csi-node-driver-frtm6\" (UID: \"732940a2-6d95-4610-b476-89508bce10b7\") " pod="calico-system/csi-node-driver-frtm6"
Nov 8 00:39:59.212961 kubelet[2815]: I1108 00:39:59.212611 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crnk6\" (UniqueName: \"kubernetes.io/projected/732940a2-6d95-4610-b476-89508bce10b7-kube-api-access-crnk6\") pod \"csi-node-driver-frtm6\" (UID: \"732940a2-6d95-4610-b476-89508bce10b7\") " pod="calico-system/csi-node-driver-frtm6"
Nov 8 00:39:59.215500 kubelet[2815]: I1108 00:39:59.215396 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/732940a2-6d95-4610-b476-89508bce10b7-registration-dir\") pod \"csi-node-driver-frtm6\" (UID: \"732940a2-6d95-4610-b476-89508bce10b7\") " pod="calico-system/csi-node-driver-frtm6"
Nov 8 00:39:59.218211 kubelet[2815]: I1108 00:39:59.217956 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/732940a2-6d95-4610-b476-89508bce10b7-socket-dir\") pod \"csi-node-driver-frtm6\" (UID: \"732940a2-6d95-4610-b476-89508bce10b7\") " pod="calico-system/csi-node-driver-frtm6"
Nov 8 00:39:59.230591 kubelet[2815]: E1108 00:39:59.230286 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:39:59.230591 kubelet[2815]: W1108 00:39:59.230309 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:39:59.230591 kubelet[2815]: E1108 00:39:59.230327 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 8 00:39:59.248904 containerd[1626]: time="2025-11-08T00:39:59.248509770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:39:59.249313 containerd[1626]: time="2025-11-08T00:39:59.248635078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:39:59.249313 containerd[1626]: time="2025-11-08T00:39:59.248667864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:59.249313 containerd[1626]: time="2025-11-08T00:39:59.248820101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:39:59.288965 containerd[1626]: time="2025-11-08T00:39:59.288593375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86d9d7f89d-9h9wp,Uid:d17e8806-23ae-4ebf-aa67-c4e316e0ad15,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad7f3e835ca163514b8cbee38e89085da67f2279b9edb9e8a0d40851a12b535f\"" Nov 8 00:39:59.301515 containerd[1626]: time="2025-11-08T00:39:59.301369852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:39:59.320449 kubelet[2815]: E1108 00:39:59.318718 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.320449 kubelet[2815]: W1108 00:39:59.318916 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.320449 kubelet[2815]: E1108 00:39:59.318980 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.320449 kubelet[2815]: E1108 00:39:59.319800 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.320449 kubelet[2815]: W1108 00:39:59.319819 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.320449 kubelet[2815]: E1108 00:39:59.319858 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.320825 kubelet[2815]: E1108 00:39:59.320809 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.320894 kubelet[2815]: W1108 00:39:59.320824 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.320954 kubelet[2815]: E1108 00:39:59.320889 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:39:59.321818 kubelet[2815]: E1108 00:39:59.321788 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.321818 kubelet[2815]: W1108 00:39:59.321810 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.321952 kubelet[2815]: E1108 00:39:59.321835 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.322888 kubelet[2815]: E1108 00:39:59.322451 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.322888 kubelet[2815]: W1108 00:39:59.322744 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.323146 kubelet[2815]: E1108 00:39:59.323044 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.323146 kubelet[2815]: E1108 00:39:59.323069 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.323146 kubelet[2815]: W1108 00:39:59.323084 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.323406 kubelet[2815]: E1108 00:39:59.323370 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.323865 kubelet[2815]: E1108 00:39:59.323842 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.323865 kubelet[2815]: W1108 00:39:59.323862 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.324031 kubelet[2815]: E1108 00:39:59.323994 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.324627 kubelet[2815]: E1108 00:39:59.324588 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.324627 kubelet[2815]: W1108 00:39:59.324627 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.324839 kubelet[2815]: E1108 00:39:59.324675 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:39:59.325354 kubelet[2815]: E1108 00:39:59.325256 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.325354 kubelet[2815]: W1108 00:39:59.325287 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.326174 kubelet[2815]: E1108 00:39:59.325506 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.326174 kubelet[2815]: E1108 00:39:59.325656 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.326174 kubelet[2815]: W1108 00:39:59.325671 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.326174 kubelet[2815]: E1108 00:39:59.325780 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.326174 kubelet[2815]: E1108 00:39:59.326050 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.326174 kubelet[2815]: W1108 00:39:59.326064 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.326174 kubelet[2815]: E1108 00:39:59.326164 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.326548 kubelet[2815]: E1108 00:39:59.326434 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.326548 kubelet[2815]: W1108 00:39:59.326448 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.326640 kubelet[2815]: E1108 00:39:59.326573 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.326967 kubelet[2815]: E1108 00:39:59.326774 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.326967 kubelet[2815]: W1108 00:39:59.326810 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.326967 kubelet[2815]: E1108 00:39:59.326897 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:39:59.327277 kubelet[2815]: E1108 00:39:59.327178 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.327277 kubelet[2815]: W1108 00:39:59.327197 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.327398 kubelet[2815]: E1108 00:39:59.327334 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.327913 kubelet[2815]: E1108 00:39:59.327523 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.327913 kubelet[2815]: W1108 00:39:59.327543 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.327913 kubelet[2815]: E1108 00:39:59.327582 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.327913 kubelet[2815]: E1108 00:39:59.327804 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.327913 kubelet[2815]: W1108 00:39:59.327817 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.327913 kubelet[2815]: E1108 00:39:59.327853 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.328226 kubelet[2815]: E1108 00:39:59.328071 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.328226 kubelet[2815]: W1108 00:39:59.328086 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.328226 kubelet[2815]: E1108 00:39:59.328165 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.329960 kubelet[2815]: E1108 00:39:59.328430 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.329960 kubelet[2815]: W1108 00:39:59.328444 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.329960 kubelet[2815]: E1108 00:39:59.328577 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:39:59.329960 kubelet[2815]: E1108 00:39:59.328768 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.329960 kubelet[2815]: W1108 00:39:59.328781 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.329960 kubelet[2815]: E1108 00:39:59.328878 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.329960 kubelet[2815]: E1108 00:39:59.329071 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.329960 kubelet[2815]: W1108 00:39:59.329085 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.329960 kubelet[2815]: E1108 00:39:59.329276 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.329960 kubelet[2815]: E1108 00:39:59.329526 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.330461 kubelet[2815]: W1108 00:39:59.329541 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.330461 kubelet[2815]: E1108 00:39:59.329878 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.330461 kubelet[2815]: E1108 00:39:59.330327 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.330461 kubelet[2815]: W1108 00:39:59.330341 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.330784 kubelet[2815]: E1108 00:39:59.330715 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.330898 kubelet[2815]: E1108 00:39:59.330872 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.330898 kubelet[2815]: W1108 00:39:59.330892 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.331501 kubelet[2815]: E1108 00:39:59.331052 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:39:59.331501 kubelet[2815]: E1108 00:39:59.331309 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.331501 kubelet[2815]: W1108 00:39:59.331323 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.331501 kubelet[2815]: E1108 00:39:59.331444 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.332686 kubelet[2815]: E1108 00:39:59.332635 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.332686 kubelet[2815]: W1108 00:39:59.332659 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.332686 kubelet[2815]: E1108 00:39:59.332678 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:39:59.346840 containerd[1626]: time="2025-11-08T00:39:59.346647370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9vqf9,Uid:390a1c68-66b0-4ba5-847f-d8d370db147c,Namespace:calico-system,Attempt:0,} returns sandbox id \"298f3dde743a84974f0d6f1d4982955df07134f252b3ef7f4360beaeefb35afd\"" Nov 8 00:39:59.351755 kubelet[2815]: E1108 00:39:59.351724 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:39:59.351755 kubelet[2815]: W1108 00:39:59.351752 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:39:59.351906 kubelet[2815]: E1108 00:39:59.351780 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:00.930385 kubelet[2815]: E1108 00:40:00.929671 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:00.950702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590392437.mount: Deactivated successfully. 
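The repeating triplet above (driver-call.go:262, driver-call.go:149, plugins.go:695) is one failure seen twice: the FlexVolume probe cannot exec the nodeagent~uds/uds binary, so the call returns no output, and unmarshalling that empty output is what yields "unexpected end of JSON input". A minimal Go sketch of the same two-step failure follows; DriverStatus is an illustrative stand-in for kubelet's real FlexVolume status type, not the actual API.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus is a minimal stand-in for the JSON object kubelet expects a
    // FlexVolume driver to print; the real type has more fields.
    type DriverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        // The driver path probed in the log. On this node the binary does not
        // exist yet, so the call fails and produces no output.
        driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

        out, err := exec.Command(driver, "init").CombinedOutput()
        if err != nil {
            fmt.Printf("driver call failed: %v, output: %q\n", err, out)
        }

        // Unmarshalling the empty output is the second failure the log reports:
        // json.Unmarshal on zero bytes returns "unexpected end of JSON input".
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Printf("failed to unmarshal output for command: init, error: %v\n", err)
        }
    }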
Nov 8 00:40:02.827590 containerd[1626]: time="2025-11-08T00:40:02.826400583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:02.827590 containerd[1626]: time="2025-11-08T00:40:02.827545209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:40:02.828912 containerd[1626]: time="2025-11-08T00:40:02.828849574Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:02.853084 containerd[1626]: time="2025-11-08T00:40:02.852997336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:02.863241 containerd[1626]: time="2025-11-08T00:40:02.862941489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.561496663s" Nov 8 00:40:02.863241 containerd[1626]: time="2025-11-08T00:40:02.863016252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:40:02.871303 containerd[1626]: time="2025-11-08T00:40:02.871226500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:40:02.931927 containerd[1626]: time="2025-11-08T00:40:02.930364568Z" level=info msg="CreateContainer within sandbox \"ad7f3e835ca163514b8cbee38e89085da67f2279b9edb9e8a0d40851a12b535f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:40:02.936575 kubelet[2815]: E1108 00:40:02.936361 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:02.953518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205246103.mount: Deactivated successfully. Nov 8 00:40:02.958049 containerd[1626]: time="2025-11-08T00:40:02.957962528Z" level=info msg="CreateContainer within sandbox \"ad7f3e835ca163514b8cbee38e89085da67f2279b9edb9e8a0d40851a12b535f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"289d900a5744a9ea69d336bd95b895bbec7664601d1c793f14848e1590de6638\"" Nov 8 00:40:02.966467 containerd[1626]: time="2025-11-08T00:40:02.965608354Z" level=info msg="StartContainer for \"289d900a5744a9ea69d336bd95b895bbec7664601d1c793f14848e1590de6638\"" Nov 8 00:40:03.114257 containerd[1626]: time="2025-11-08T00:40:03.112996496Z" level=info msg="StartContainer for \"289d900a5744a9ea69d336bd95b895bbec7664601d1c793f14848e1590de6638\" returns successfully" Nov 8 00:40:03.657928 systemd-journald[1183]: Under memory pressure, flushing caches. Nov 8 00:40:03.651306 systemd-resolved[1517]: Under memory pressure, flushing caches. Nov 8 00:40:03.651414 systemd-resolved[1517]: Flushed all caches. 
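As a rough sanity check on the typha pull above: containerd reports 35234628 bytes read and a pull duration of 3.561496663s, which works out to roughly 9.4 MiB/s. A trivial Go snippet with those two figures from the log:

    package main

    import "fmt"

    func main() {
        // Figures taken from the log's typha pull:
        // "bytes read=35234628" over the reported 3.561496663s.
        const bytesRead = 35234628.0
        const seconds = 3.561496663
        fmt.Printf("effective pull rate: %.1f MiB/s\n", bytesRead/seconds/(1<<20))
        // Prints roughly 9.4 MiB/s.
    }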
Nov 8 00:40:04.113781 kubelet[2815]: E1108 00:40:04.113085 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.113781 kubelet[2815]: W1108 00:40:04.113158 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.114605 kubelet[2815]: E1108 00:40:04.114175 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.114605 kubelet[2815]: E1108 00:40:04.114461 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.114605 kubelet[2815]: W1108 00:40:04.114480 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.114605 kubelet[2815]: E1108 00:40:04.114497 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.115475 kubelet[2815]: E1108 00:40:04.114782 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.115475 kubelet[2815]: W1108 00:40:04.114805 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.115475 kubelet[2815]: E1108 00:40:04.114823 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.128625 kubelet[2815]: E1108 00:40:04.128585 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.128625 kubelet[2815]: W1108 00:40:04.128617 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.129536 kubelet[2815]: E1108 00:40:04.128644 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.129536 kubelet[2815]: E1108 00:40:04.128941 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.129536 kubelet[2815]: W1108 00:40:04.128956 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.129536 kubelet[2815]: E1108 00:40:04.128972 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:40:04.129536 kubelet[2815]: E1108 00:40:04.129270 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.129536 kubelet[2815]: W1108 00:40:04.129285 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.129536 kubelet[2815]: E1108 00:40:04.129302 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.129854 kubelet[2815]: E1108 00:40:04.129595 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.129854 kubelet[2815]: W1108 00:40:04.129610 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.129854 kubelet[2815]: E1108 00:40:04.129628 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.129994 kubelet[2815]: E1108 00:40:04.129893 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.129994 kubelet[2815]: W1108 00:40:04.129907 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.129994 kubelet[2815]: E1108 00:40:04.129922 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.136367 kubelet[2815]: E1108 00:40:04.136213 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.136367 kubelet[2815]: W1108 00:40:04.136244 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.136367 kubelet[2815]: E1108 00:40:04.136267 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.136838 kubelet[2815]: E1108 00:40:04.136670 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.136838 kubelet[2815]: W1108 00:40:04.136702 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.136838 kubelet[2815]: E1108 00:40:04.136720 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:40:04.137905 kubelet[2815]: E1108 00:40:04.137881 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.137905 kubelet[2815]: W1108 00:40:04.137903 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.138217 kubelet[2815]: E1108 00:40:04.137921 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.138311 kubelet[2815]: E1108 00:40:04.138297 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.138522 kubelet[2815]: W1108 00:40:04.138312 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.138522 kubelet[2815]: E1108 00:40:04.138328 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.138681 kubelet[2815]: E1108 00:40:04.138647 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.138681 kubelet[2815]: W1108 00:40:04.138670 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.138794 kubelet[2815]: E1108 00:40:04.138688 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.138997 kubelet[2815]: E1108 00:40:04.138959 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.138997 kubelet[2815]: W1108 00:40:04.138984 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.139167 kubelet[2815]: E1108 00:40:04.139001 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.139316 kubelet[2815]: E1108 00:40:04.139296 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.139316 kubelet[2815]: W1108 00:40:04.139318 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.139561 kubelet[2815]: E1108 00:40:04.139337 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:40:04.156252 kubelet[2815]: I1108 00:40:04.156138 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86d9d7f89d-9h9wp" podStartSLOduration=2.585850799 podStartE2EDuration="6.156088825s" podCreationTimestamp="2025-11-08 00:39:58 +0000 UTC" firstStartedPulling="2025-11-08 00:39:59.299543439 +0000 UTC m=+22.573057543" lastFinishedPulling="2025-11-08 00:40:02.869781458 +0000 UTC m=+26.143295569" observedRunningTime="2025-11-08 00:40:04.148288468 +0000 UTC m=+27.421802592" watchObservedRunningTime="2025-11-08 00:40:04.156088825 +0000 UTC m=+27.429602936" Nov 8 00:40:04.162115 kubelet[2815]: E1108 00:40:04.161592 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.162115 kubelet[2815]: W1108 00:40:04.161624 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.162115 kubelet[2815]: E1108 00:40:04.161652 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.162450 kubelet[2815]: E1108 00:40:04.162429 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.162555 kubelet[2815]: W1108 00:40:04.162534 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.162747 kubelet[2815]: E1108 00:40:04.162665 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.163076 kubelet[2815]: E1108 00:40:04.163039 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.163076 kubelet[2815]: W1108 00:40:04.163065 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.163416 kubelet[2815]: E1108 00:40:04.163093 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.163416 kubelet[2815]: E1108 00:40:04.163404 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.163416 kubelet[2815]: W1108 00:40:04.163420 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.163910 kubelet[2815]: E1108 00:40:04.163435 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:40:04.164196 kubelet[2815]: E1108 00:40:04.164175 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.164196 kubelet[2815]: W1108 00:40:04.164195 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.164450 kubelet[2815]: E1108 00:40:04.164326 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.164516 kubelet[2815]: E1108 00:40:04.164506 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.164561 kubelet[2815]: W1108 00:40:04.164521 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.164844 kubelet[2815]: E1108 00:40:04.164810 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.164844 kubelet[2815]: W1108 00:40:04.164834 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.164972 kubelet[2815]: E1108 00:40:04.164850 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.166078 kubelet[2815]: E1108 00:40:04.166015 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.166078 kubelet[2815]: W1108 00:40:04.166040 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.166078 kubelet[2815]: E1108 00:40:04.166057 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.167268 kubelet[2815]: E1108 00:40:04.167237 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.167268 kubelet[2815]: W1108 00:40:04.167264 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.167396 kubelet[2815]: E1108 00:40:04.167282 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.168926 kubelet[2815]: E1108 00:40:04.168801 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:40:04.169027 kubelet[2815]: E1108 00:40:04.168961 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.169027 kubelet[2815]: W1108 00:40:04.168976 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.169027 kubelet[2815]: E1108 00:40:04.169004 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.169907 kubelet[2815]: E1108 00:40:04.169617 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.169907 kubelet[2815]: W1108 00:40:04.169639 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.169907 kubelet[2815]: E1108 00:40:04.169670 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.170282 kubelet[2815]: E1108 00:40:04.170261 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.170282 kubelet[2815]: W1108 00:40:04.170281 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.170406 kubelet[2815]: E1108 00:40:04.170305 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.170618 kubelet[2815]: E1108 00:40:04.170593 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.170618 kubelet[2815]: W1108 00:40:04.170607 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.170764 kubelet[2815]: E1108 00:40:04.170741 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.171013 kubelet[2815]: E1108 00:40:04.170994 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.171013 kubelet[2815]: W1108 00:40:04.171012 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.171272 kubelet[2815]: E1108 00:40:04.171147 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:40:04.171355 kubelet[2815]: E1108 00:40:04.171302 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.171355 kubelet[2815]: W1108 00:40:04.171315 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.171597 kubelet[2815]: E1108 00:40:04.171455 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.171656 kubelet[2815]: E1108 00:40:04.171613 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.171656 kubelet[2815]: W1108 00:40:04.171626 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.171656 kubelet[2815]: E1108 00:40:04.171641 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.172061 kubelet[2815]: E1108 00:40:04.172007 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.172179 kubelet[2815]: W1108 00:40:04.172062 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.172179 kubelet[2815]: E1108 00:40:04.172082 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:40:04.172903 kubelet[2815]: E1108 00:40:04.172880 2815 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:40:04.172903 kubelet[2815]: W1108 00:40:04.172901 2815 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:40:04.173019 kubelet[2815]: E1108 00:40:04.172918 2815 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:40:04.473907 containerd[1626]: time="2025-11-08T00:40:04.473727421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:04.477840 containerd[1626]: time="2025-11-08T00:40:04.477772465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:40:04.482071 containerd[1626]: time="2025-11-08T00:40:04.482007350Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:04.488159 containerd[1626]: time="2025-11-08T00:40:04.487674016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:04.490500 containerd[1626]: time="2025-11-08T00:40:04.490458166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.619170136s" Nov 8 00:40:04.490665 containerd[1626]: time="2025-11-08T00:40:04.490630573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:40:04.528013 containerd[1626]: time="2025-11-08T00:40:04.527812469Z" level=info msg="CreateContainer within sandbox \"298f3dde743a84974f0d6f1d4982955df07134f252b3ef7f4360beaeefb35afd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:40:04.579211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1199808932.mount: Deactivated successfully. 
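The flexvol-driver container created above is Calico's pod2daemon image, whose job is to install the uds binary into the very directory kubelet has been probing, after which the FlexVolume errors stop. A hypothetical watcher (not part of kubelet or Calico) that confirms the install from the node would just poll for the path:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Same path kubelet probes in the log; the flexvol-driver init
        // container created above is what installs the uds binary there.
        const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(driver); err == nil {
                fmt.Println("uds driver present; kubelet's probe errors should stop")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("uds driver still missing after 60s")
    }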
Nov 8 00:40:04.582049 containerd[1626]: time="2025-11-08T00:40:04.581995862Z" level=info msg="CreateContainer within sandbox \"298f3dde743a84974f0d6f1d4982955df07134f252b3ef7f4360beaeefb35afd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4cf1ebeaa73a22180ca45390807c5ca2c499c637cf5ada76a6f592cd6def2d5d\"" Nov 8 00:40:04.583341 containerd[1626]: time="2025-11-08T00:40:04.583307515Z" level=info msg="StartContainer for \"4cf1ebeaa73a22180ca45390807c5ca2c499c637cf5ada76a6f592cd6def2d5d\"" Nov 8 00:40:04.738278 containerd[1626]: time="2025-11-08T00:40:04.738038816Z" level=info msg="StartContainer for \"4cf1ebeaa73a22180ca45390807c5ca2c499c637cf5ada76a6f592cd6def2d5d\" returns successfully" Nov 8 00:40:04.808935 containerd[1626]: time="2025-11-08T00:40:04.808815821Z" level=info msg="shim disconnected" id=4cf1ebeaa73a22180ca45390807c5ca2c499c637cf5ada76a6f592cd6def2d5d namespace=k8s.io Nov 8 00:40:04.808935 containerd[1626]: time="2025-11-08T00:40:04.808935960Z" level=warning msg="cleaning up after shim disconnected" id=4cf1ebeaa73a22180ca45390807c5ca2c499c637cf5ada76a6f592cd6def2d5d namespace=k8s.io Nov 8 00:40:04.809450 containerd[1626]: time="2025-11-08T00:40:04.808953812Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:40:04.893376 systemd[1]: run-containerd-runc-k8s.io-4cf1ebeaa73a22180ca45390807c5ca2c499c637cf5ada76a6f592cd6def2d5d-runc.LOAYvb.mount: Deactivated successfully. Nov 8 00:40:04.893635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cf1ebeaa73a22180ca45390807c5ca2c499c637cf5ada76a6f592cd6def2d5d-rootfs.mount: Deactivated successfully. Nov 8 00:40:04.926710 kubelet[2815]: E1108 00:40:04.926506 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:05.097812 kubelet[2815]: I1108 00:40:05.096584 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:40:05.101118 containerd[1626]: time="2025-11-08T00:40:05.101057311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:40:06.923812 kubelet[2815]: E1108 00:40:06.923750 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:08.924639 kubelet[2815]: E1108 00:40:08.924567 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:09.956483 containerd[1626]: time="2025-11-08T00:40:09.956400864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:09.957687 containerd[1626]: time="2025-11-08T00:40:09.957608493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:40:09.959589 containerd[1626]: time="2025-11-08T00:40:09.959526280Z" 
level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:09.964865 containerd[1626]: time="2025-11-08T00:40:09.963890048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:09.964865 containerd[1626]: time="2025-11-08T00:40:09.964696486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.863587037s" Nov 8 00:40:09.964865 containerd[1626]: time="2025-11-08T00:40:09.964734185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:40:09.968491 containerd[1626]: time="2025-11-08T00:40:09.968451248Z" level=info msg="CreateContainer within sandbox \"298f3dde743a84974f0d6f1d4982955df07134f252b3ef7f4360beaeefb35afd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:40:10.013366 containerd[1626]: time="2025-11-08T00:40:10.013314591Z" level=info msg="CreateContainer within sandbox \"298f3dde743a84974f0d6f1d4982955df07134f252b3ef7f4360beaeefb35afd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0f8b90bb6b5ef00ca504eb342e6ed4be37834dfb4546dd2b4666ecebff64f1b9\"" Nov 8 00:40:10.014855 containerd[1626]: time="2025-11-08T00:40:10.014820336Z" level=info msg="StartContainer for \"0f8b90bb6b5ef00ca504eb342e6ed4be37834dfb4546dd2b4666ecebff64f1b9\"" Nov 8 00:40:10.310685 containerd[1626]: time="2025-11-08T00:40:10.310610368Z" level=info msg="StartContainer for \"0f8b90bb6b5ef00ca504eb342e6ed4be37834dfb4546dd2b4666ecebff64f1b9\" returns successfully" Nov 8 00:40:10.924635 kubelet[2815]: E1108 00:40:10.924109 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:11.386973 kubelet[2815]: I1108 00:40:11.386894 2815 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:40:11.423804 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f8b90bb6b5ef00ca504eb342e6ed4be37834dfb4546dd2b4666ecebff64f1b9-rootfs.mount: Deactivated successfully. 
Nov 8 00:40:11.429401 containerd[1626]: time="2025-11-08T00:40:11.426490570Z" level=info msg="shim disconnected" id=0f8b90bb6b5ef00ca504eb342e6ed4be37834dfb4546dd2b4666ecebff64f1b9 namespace=k8s.io Nov 8 00:40:11.429401 containerd[1626]: time="2025-11-08T00:40:11.426566751Z" level=warning msg="cleaning up after shim disconnected" id=0f8b90bb6b5ef00ca504eb342e6ed4be37834dfb4546dd2b4666ecebff64f1b9 namespace=k8s.io Nov 8 00:40:11.429401 containerd[1626]: time="2025-11-08T00:40:11.426582573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:40:11.526403 containerd[1626]: time="2025-11-08T00:40:11.526334186Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:40:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:40:11.526606 kubelet[2815]: I1108 00:40:11.526578 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b360798f-4525-4a64-8263-1b2065da4cca-config-volume\") pod \"coredns-668d6bf9bc-4ddc6\" (UID: \"b360798f-4525-4a64-8263-1b2065da4cca\") " pod="kube-system/coredns-668d6bf9bc-4ddc6" Nov 8 00:40:11.526718 kubelet[2815]: I1108 00:40:11.526630 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm9b2\" (UniqueName: \"kubernetes.io/projected/321e585a-41b7-4e8f-995a-c57a69c6e824-kube-api-access-dm9b2\") pod \"calico-kube-controllers-7d465f66d6-5v9hs\" (UID: \"321e585a-41b7-4e8f-995a-c57a69c6e824\") " pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" Nov 8 00:40:11.526718 kubelet[2815]: I1108 00:40:11.526667 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57g2q\" (UniqueName: \"kubernetes.io/projected/1c25bfb9-44c6-4360-955b-d1bd985cf551-kube-api-access-57g2q\") pod \"calico-apiserver-86f7fc8b8c-rqgv4\" (UID: \"1c25bfb9-44c6-4360-955b-d1bd985cf551\") " pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" Nov 8 00:40:11.526718 kubelet[2815]: I1108 00:40:11.526706 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzcz9\" (UniqueName: \"kubernetes.io/projected/b360798f-4525-4a64-8263-1b2065da4cca-kube-api-access-bzcz9\") pod \"coredns-668d6bf9bc-4ddc6\" (UID: \"b360798f-4525-4a64-8263-1b2065da4cca\") " pod="kube-system/coredns-668d6bf9bc-4ddc6" Nov 8 00:40:11.526921 kubelet[2815]: I1108 00:40:11.526735 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klkkf\" (UniqueName: \"kubernetes.io/projected/5d88aab4-7fae-4885-9d4e-0a85f6911a17-kube-api-access-klkkf\") pod \"coredns-668d6bf9bc-vq2vj\" (UID: \"5d88aab4-7fae-4885-9d4e-0a85f6911a17\") " pod="kube-system/coredns-668d6bf9bc-vq2vj" Nov 8 00:40:11.526921 kubelet[2815]: I1108 00:40:11.526765 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a229dd5-8929-4dea-a351-ff8ac4498f1d-goldmane-ca-bundle\") pod \"goldmane-666569f655-wvrwm\" (UID: \"9a229dd5-8929-4dea-a351-ff8ac4498f1d\") " pod="calico-system/goldmane-666569f655-wvrwm" Nov 8 00:40:11.526921 kubelet[2815]: I1108 00:40:11.526796 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/5d88aab4-7fae-4885-9d4e-0a85f6911a17-config-volume\") pod \"coredns-668d6bf9bc-vq2vj\" (UID: \"5d88aab4-7fae-4885-9d4e-0a85f6911a17\") " pod="kube-system/coredns-668d6bf9bc-vq2vj" Nov 8 00:40:11.526921 kubelet[2815]: I1108 00:40:11.526826 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ad6b56e-2fd0-4653-867f-174ff7a29321-calico-apiserver-certs\") pod \"calico-apiserver-86f7fc8b8c-7f5zz\" (UID: \"0ad6b56e-2fd0-4653-867f-174ff7a29321\") " pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" Nov 8 00:40:11.526921 kubelet[2815]: I1108 00:40:11.526854 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1c25bfb9-44c6-4360-955b-d1bd985cf551-calico-apiserver-certs\") pod \"calico-apiserver-86f7fc8b8c-rqgv4\" (UID: \"1c25bfb9-44c6-4360-955b-d1bd985cf551\") " pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" Nov 8 00:40:11.527381 kubelet[2815]: I1108 00:40:11.526896 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvnpg\" (UniqueName: \"kubernetes.io/projected/9a229dd5-8929-4dea-a351-ff8ac4498f1d-kube-api-access-vvnpg\") pod \"goldmane-666569f655-wvrwm\" (UID: \"9a229dd5-8929-4dea-a351-ff8ac4498f1d\") " pod="calico-system/goldmane-666569f655-wvrwm" Nov 8 00:40:11.527381 kubelet[2815]: I1108 00:40:11.526925 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3547b141-c897-4b93-a962-c86845f8e62c-whisker-backend-key-pair\") pod \"whisker-7d55b6f9f6-5sdgm\" (UID: \"3547b141-c897-4b93-a962-c86845f8e62c\") " pod="calico-system/whisker-7d55b6f9f6-5sdgm" Nov 8 00:40:11.527381 kubelet[2815]: I1108 00:40:11.526966 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a229dd5-8929-4dea-a351-ff8ac4498f1d-config\") pod \"goldmane-666569f655-wvrwm\" (UID: \"9a229dd5-8929-4dea-a351-ff8ac4498f1d\") " pod="calico-system/goldmane-666569f655-wvrwm" Nov 8 00:40:11.527381 kubelet[2815]: I1108 00:40:11.526996 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/321e585a-41b7-4e8f-995a-c57a69c6e824-tigera-ca-bundle\") pod \"calico-kube-controllers-7d465f66d6-5v9hs\" (UID: \"321e585a-41b7-4e8f-995a-c57a69c6e824\") " pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" Nov 8 00:40:11.527381 kubelet[2815]: I1108 00:40:11.527052 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqpj8\" (UniqueName: \"kubernetes.io/projected/3547b141-c897-4b93-a962-c86845f8e62c-kube-api-access-sqpj8\") pod \"whisker-7d55b6f9f6-5sdgm\" (UID: \"3547b141-c897-4b93-a962-c86845f8e62c\") " pod="calico-system/whisker-7d55b6f9f6-5sdgm" Nov 8 00:40:11.528185 kubelet[2815]: I1108 00:40:11.527094 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7m49\" (UniqueName: \"kubernetes.io/projected/0ad6b56e-2fd0-4653-867f-174ff7a29321-kube-api-access-l7m49\") pod \"calico-apiserver-86f7fc8b8c-7f5zz\" (UID: \"0ad6b56e-2fd0-4653-867f-174ff7a29321\") " 
pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" Nov 8 00:40:11.528418 kubelet[2815]: I1108 00:40:11.528375 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9a229dd5-8929-4dea-a351-ff8ac4498f1d-goldmane-key-pair\") pod \"goldmane-666569f655-wvrwm\" (UID: \"9a229dd5-8929-4dea-a351-ff8ac4498f1d\") " pod="calico-system/goldmane-666569f655-wvrwm" Nov 8 00:40:11.528588 kubelet[2815]: I1108 00:40:11.528552 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3547b141-c897-4b93-a962-c86845f8e62c-whisker-ca-bundle\") pod \"whisker-7d55b6f9f6-5sdgm\" (UID: \"3547b141-c897-4b93-a962-c86845f8e62c\") " pod="calico-system/whisker-7d55b6f9f6-5sdgm" Nov 8 00:40:11.715524 systemd-resolved[1517]: Under memory pressure, flushing caches. Nov 8 00:40:11.718095 systemd-journald[1183]: Under memory pressure, flushing caches. Nov 8 00:40:11.715596 systemd-resolved[1517]: Flushed all caches. Nov 8 00:40:11.777156 containerd[1626]: time="2025-11-08T00:40:11.775936542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vq2vj,Uid:5d88aab4-7fae-4885-9d4e-0a85f6911a17,Namespace:kube-system,Attempt:0,}" Nov 8 00:40:11.777778 containerd[1626]: time="2025-11-08T00:40:11.777484310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4ddc6,Uid:b360798f-4525-4a64-8263-1b2065da4cca,Namespace:kube-system,Attempt:0,}" Nov 8 00:40:11.778082 containerd[1626]: time="2025-11-08T00:40:11.778049460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f7fc8b8c-rqgv4,Uid:1c25bfb9-44c6-4360-955b-d1bd985cf551,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:40:11.781309 containerd[1626]: time="2025-11-08T00:40:11.781257307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d465f66d6-5v9hs,Uid:321e585a-41b7-4e8f-995a-c57a69c6e824,Namespace:calico-system,Attempt:0,}" Nov 8 00:40:11.807651 containerd[1626]: time="2025-11-08T00:40:11.806136262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f7fc8b8c-7f5zz,Uid:0ad6b56e-2fd0-4653-867f-174ff7a29321,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:40:11.807651 containerd[1626]: time="2025-11-08T00:40:11.806901900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d55b6f9f6-5sdgm,Uid:3547b141-c897-4b93-a962-c86845f8e62c,Namespace:calico-system,Attempt:0,}" Nov 8 00:40:11.818888 containerd[1626]: time="2025-11-08T00:40:11.818852900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wvrwm,Uid:9a229dd5-8929-4dea-a351-ff8ac4498f1d,Namespace:calico-system,Attempt:0,}" Nov 8 00:40:11.848985 kubelet[2815]: I1108 00:40:11.848299 2815 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:40:12.200306 containerd[1626]: time="2025-11-08T00:40:12.200107681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:40:12.343192 containerd[1626]: time="2025-11-08T00:40:12.341911425Z" level=error msg="Failed to destroy network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.343192 containerd[1626]: 
time="2025-11-08T00:40:12.343071737Z" level=error msg="Failed to destroy network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.349388 containerd[1626]: time="2025-11-08T00:40:12.349346898Z" level=error msg="encountered an error cleaning up failed sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.349890 containerd[1626]: time="2025-11-08T00:40:12.349852836Z" level=error msg="encountered an error cleaning up failed sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.358729 containerd[1626]: time="2025-11-08T00:40:12.358676231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f7fc8b8c-rqgv4,Uid:1c25bfb9-44c6-4360-955b-d1bd985cf551,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.367396 containerd[1626]: time="2025-11-08T00:40:12.366985069Z" level=error msg="Failed to destroy network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.367530 containerd[1626]: time="2025-11-08T00:40:12.367476740Z" level=error msg="encountered an error cleaning up failed sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.367595 containerd[1626]: time="2025-11-08T00:40:12.367532464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vq2vj,Uid:5d88aab4-7fae-4885-9d4e-0a85f6911a17,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.368142 containerd[1626]: time="2025-11-08T00:40:12.367755218Z" level=error msg="Failed to destroy network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 8 00:40:12.368142 containerd[1626]: time="2025-11-08T00:40:12.368100714Z" level=error msg="encountered an error cleaning up failed sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.368279 containerd[1626]: time="2025-11-08T00:40:12.368209239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d465f66d6-5v9hs,Uid:321e585a-41b7-4e8f-995a-c57a69c6e824,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.368426 containerd[1626]: time="2025-11-08T00:40:12.368327532Z" level=error msg="Failed to destroy network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.371629 containerd[1626]: time="2025-11-08T00:40:12.368666062Z" level=error msg="encountered an error cleaning up failed sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.371629 containerd[1626]: time="2025-11-08T00:40:12.368708771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d55b6f9f6-5sdgm,Uid:3547b141-c897-4b93-a962-c86845f8e62c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.371629 containerd[1626]: time="2025-11-08T00:40:12.368769453Z" level=error msg="Failed to destroy network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.371629 containerd[1626]: time="2025-11-08T00:40:12.368825581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4ddc6,Uid:b360798f-4525-4a64-8263-1b2065da4cca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.371629 containerd[1626]: time="2025-11-08T00:40:12.369297023Z" level=error msg="Failed to destroy network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.371629 containerd[1626]: time="2025-11-08T00:40:12.369454226Z" level=error msg="encountered an error cleaning up failed sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.371629 containerd[1626]: time="2025-11-08T00:40:12.369508040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wvrwm,Uid:9a229dd5-8929-4dea-a351-ff8ac4498f1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.371629 containerd[1626]: time="2025-11-08T00:40:12.369848517Z" level=error msg="encountered an error cleaning up failed sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.371629 containerd[1626]: time="2025-11-08T00:40:12.369936854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f7fc8b8c-7f5zz,Uid:0ad6b56e-2fd0-4653-867f-174ff7a29321,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.372503 kubelet[2815]: E1108 00:40:12.370506 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.372503 kubelet[2815]: E1108 00:40:12.370625 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4ddc6" Nov 8 00:40:12.372503 kubelet[2815]: E1108 00:40:12.370668 2815 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-4ddc6" Nov 8 00:40:12.373090 kubelet[2815]: E1108 00:40:12.370765 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4ddc6_kube-system(b360798f-4525-4a64-8263-1b2065da4cca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4ddc6_kube-system(b360798f-4525-4a64-8263-1b2065da4cca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4ddc6" podUID="b360798f-4525-4a64-8263-1b2065da4cca" Nov 8 00:40:12.373090 kubelet[2815]: E1108 00:40:12.370818 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.373090 kubelet[2815]: E1108 00:40:12.370900 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" Nov 8 00:40:12.373090 kubelet[2815]: E1108 00:40:12.370921 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.373464 kubelet[2815]: E1108 00:40:12.370936 2815 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" Nov 8 00:40:12.373464 kubelet[2815]: E1108 00:40:12.370966 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" Nov 8 00:40:12.373464 kubelet[2815]: E1108 00:40:12.370989 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f7fc8b8c-rqgv4_calico-apiserver(1c25bfb9-44c6-4360-955b-d1bd985cf551)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-86f7fc8b8c-rqgv4_calico-apiserver(1c25bfb9-44c6-4360-955b-d1bd985cf551)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:40:12.374078 kubelet[2815]: E1108 00:40:12.370998 2815 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" Nov 8 00:40:12.374078 kubelet[2815]: E1108 00:40:12.371024 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.374078 kubelet[2815]: E1108 00:40:12.371054 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wvrwm" Nov 8 00:40:12.374078 kubelet[2815]: E1108 00:40:12.371080 2815 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wvrwm" Nov 8 00:40:12.374351 kubelet[2815]: E1108 00:40:12.371104 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.374351 kubelet[2815]: E1108 00:40:12.371163 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vq2vj" Nov 8 00:40:12.374351 kubelet[2815]: E1108 00:40:12.371204 2815 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-vq2vj" Nov 8 00:40:12.374506 kubelet[2815]: E1108 00:40:12.371204 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wvrwm_calico-system(9a229dd5-8929-4dea-a351-ff8ac4498f1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wvrwm_calico-system(9a229dd5-8929-4dea-a351-ff8ac4498f1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:40:12.374506 kubelet[2815]: E1108 00:40:12.371055 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-86f7fc8b8c-7f5zz_calico-apiserver(0ad6b56e-2fd0-4653-867f-174ff7a29321)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-86f7fc8b8c-7f5zz_calico-apiserver(0ad6b56e-2fd0-4653-867f-174ff7a29321)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:40:12.374674 kubelet[2815]: E1108 00:40:12.371247 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vq2vj_kube-system(5d88aab4-7fae-4885-9d4e-0a85f6911a17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vq2vj_kube-system(5d88aab4-7fae-4885-9d4e-0a85f6911a17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vq2vj" podUID="5d88aab4-7fae-4885-9d4e-0a85f6911a17" Nov 8 00:40:12.374674 kubelet[2815]: E1108 00:40:12.371277 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.374674 kubelet[2815]: E1108 00:40:12.371311 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d55b6f9f6-5sdgm" Nov 8 00:40:12.375119 kubelet[2815]: E1108 00:40:12.371334 2815 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d55b6f9f6-5sdgm" Nov 8 00:40:12.375119 kubelet[2815]: E1108 00:40:12.371372 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d55b6f9f6-5sdgm_calico-system(3547b141-c897-4b93-a962-c86845f8e62c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d55b6f9f6-5sdgm_calico-system(3547b141-c897-4b93-a962-c86845f8e62c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d55b6f9f6-5sdgm" podUID="3547b141-c897-4b93-a962-c86845f8e62c" Nov 8 00:40:12.375119 kubelet[2815]: E1108 00:40:12.371417 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:12.375872 kubelet[2815]: E1108 00:40:12.371448 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" Nov 8 00:40:12.375872 kubelet[2815]: E1108 00:40:12.371471 2815 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" Nov 8 00:40:12.375872 kubelet[2815]: E1108 00:40:12.373255 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d465f66d6-5v9hs_calico-system(321e585a-41b7-4e8f-995a-c57a69c6e824)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d465f66d6-5v9hs_calico-system(321e585a-41b7-4e8f-995a-c57a69c6e824)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:40:12.927767 containerd[1626]: time="2025-11-08T00:40:12.927677101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frtm6,Uid:732940a2-6d95-4610-b476-89508bce10b7,Namespace:calico-system,Attempt:0,}" Nov 8 00:40:13.020268 containerd[1626]: time="2025-11-08T00:40:13.020196350Z" level=error msg="Failed to destroy network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.021531 containerd[1626]: time="2025-11-08T00:40:13.021486526Z" level=error msg="encountered an error cleaning up failed sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.021618 containerd[1626]: time="2025-11-08T00:40:13.021556643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frtm6,Uid:732940a2-6d95-4610-b476-89508bce10b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.021843 kubelet[2815]: E1108 00:40:13.021797 2815 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.021943 kubelet[2815]: E1108 00:40:13.021882 2815 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-frtm6" Nov 8 00:40:13.021943 kubelet[2815]: E1108 00:40:13.021922 2815 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-frtm6" Nov 8 00:40:13.022052 kubelet[2815]: E1108 00:40:13.021987 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:13.024371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61-shm.mount: Deactivated successfully. Nov 8 00:40:13.169011 kubelet[2815]: I1108 00:40:13.168768 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:13.172375 kubelet[2815]: I1108 00:40:13.171894 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:13.194434 kubelet[2815]: I1108 00:40:13.194291 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:13.198828 kubelet[2815]: I1108 00:40:13.198783 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:13.202736 kubelet[2815]: I1108 00:40:13.202707 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:13.208314 kubelet[2815]: I1108 00:40:13.208284 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:13.211427 kubelet[2815]: I1108 00:40:13.211396 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:13.214756 kubelet[2815]: I1108 00:40:13.214726 2815 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:13.223248 containerd[1626]: time="2025-11-08T00:40:13.222445352Z" level=info msg="StopPodSandbox for \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\"" Nov 8 00:40:13.223248 containerd[1626]: time="2025-11-08T00:40:13.223081433Z" level=info msg="StopPodSandbox for \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\"" Nov 8 00:40:13.225141 containerd[1626]: time="2025-11-08T00:40:13.224976017Z" level=info msg="Ensure that sandbox f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb in task-service has been cleanup successfully" Nov 8 00:40:13.225141 containerd[1626]: time="2025-11-08T00:40:13.225074804Z" level=info msg="StopPodSandbox for \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\"" Nov 8 00:40:13.225441 containerd[1626]: time="2025-11-08T00:40:13.224993008Z" level=info msg="Ensure that sandbox e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61 in task-service has been cleanup successfully" Nov 8 00:40:13.225441 containerd[1626]: time="2025-11-08T00:40:13.225402640Z" level=info msg="StopPodSandbox for 
\"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\"" Nov 8 00:40:13.225863 containerd[1626]: time="2025-11-08T00:40:13.225735139Z" level=info msg="Ensure that sandbox 5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9 in task-service has been cleanup successfully" Nov 8 00:40:13.226243 containerd[1626]: time="2025-11-08T00:40:13.226044695Z" level=info msg="Ensure that sandbox 5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253 in task-service has been cleanup successfully" Nov 8 00:40:13.227216 containerd[1626]: time="2025-11-08T00:40:13.227182101Z" level=info msg="StopPodSandbox for \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\"" Nov 8 00:40:13.227420 containerd[1626]: time="2025-11-08T00:40:13.227383163Z" level=info msg="Ensure that sandbox 8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640 in task-service has been cleanup successfully" Nov 8 00:40:13.228438 containerd[1626]: time="2025-11-08T00:40:13.228379210Z" level=info msg="StopPodSandbox for \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\"" Nov 8 00:40:13.229155 containerd[1626]: time="2025-11-08T00:40:13.229104703Z" level=info msg="Ensure that sandbox 1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3 in task-service has been cleanup successfully" Nov 8 00:40:13.231616 containerd[1626]: time="2025-11-08T00:40:13.231582844Z" level=info msg="StopPodSandbox for \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\"" Nov 8 00:40:13.232183 containerd[1626]: time="2025-11-08T00:40:13.231924021Z" level=info msg="Ensure that sandbox b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9 in task-service has been cleanup successfully" Nov 8 00:40:13.237141 containerd[1626]: time="2025-11-08T00:40:13.237045888Z" level=info msg="StopPodSandbox for \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\"" Nov 8 00:40:13.266058 containerd[1626]: time="2025-11-08T00:40:13.265995674Z" level=info msg="Ensure that sandbox 121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b in task-service has been cleanup successfully" Nov 8 00:40:13.377683 containerd[1626]: time="2025-11-08T00:40:13.377620086Z" level=error msg="StopPodSandbox for \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\" failed" error="failed to destroy network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.378231 kubelet[2815]: E1108 00:40:13.378118 2815 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:13.394248 kubelet[2815]: E1108 00:40:13.378260 2815 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3"} Nov 8 00:40:13.394386 kubelet[2815]: E1108 00:40:13.394284 2815 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" 
for \"321e585a-41b7-4e8f-995a-c57a69c6e824\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:40:13.394386 kubelet[2815]: E1108 00:40:13.394326 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"321e585a-41b7-4e8f-995a-c57a69c6e824\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:40:13.405534 containerd[1626]: time="2025-11-08T00:40:13.405120436Z" level=error msg="StopPodSandbox for \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\" failed" error="failed to destroy network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.405762 kubelet[2815]: E1108 00:40:13.405529 2815 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:13.405762 kubelet[2815]: E1108 00:40:13.405591 2815 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253"} Nov 8 00:40:13.405762 kubelet[2815]: E1108 00:40:13.405658 2815 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3547b141-c897-4b93-a962-c86845f8e62c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:40:13.405762 kubelet[2815]: E1108 00:40:13.405692 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3547b141-c897-4b93-a962-c86845f8e62c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d55b6f9f6-5sdgm" podUID="3547b141-c897-4b93-a962-c86845f8e62c" Nov 8 00:40:13.447658 containerd[1626]: 
time="2025-11-08T00:40:13.446680031Z" level=error msg="StopPodSandbox for \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\" failed" error="failed to destroy network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.450467 containerd[1626]: time="2025-11-08T00:40:13.450397933Z" level=error msg="StopPodSandbox for \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\" failed" error="failed to destroy network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.453305 containerd[1626]: time="2025-11-08T00:40:13.453233628Z" level=error msg="StopPodSandbox for \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\" failed" error="failed to destroy network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.453631 kubelet[2815]: E1108 00:40:13.453564 2815 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:13.453758 kubelet[2815]: E1108 00:40:13.453641 2815 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61"} Nov 8 00:40:13.453758 kubelet[2815]: E1108 00:40:13.453699 2815 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"732940a2-6d95-4610-b476-89508bce10b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:40:13.454290 kubelet[2815]: E1108 00:40:13.453746 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"732940a2-6d95-4610-b476-89508bce10b7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:13.454290 kubelet[2815]: E1108 00:40:13.453830 2815 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:13.454290 kubelet[2815]: E1108 00:40:13.453888 2815 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb"} Nov 8 00:40:13.454290 kubelet[2815]: E1108 00:40:13.453921 2815 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b360798f-4525-4a64-8263-1b2065da4cca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:40:13.454993 kubelet[2815]: E1108 00:40:13.453957 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b360798f-4525-4a64-8263-1b2065da4cca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4ddc6" podUID="b360798f-4525-4a64-8263-1b2065da4cca" Nov 8 00:40:13.454993 kubelet[2815]: E1108 00:40:13.454096 2815 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:13.454993 kubelet[2815]: E1108 00:40:13.454167 2815 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9"} Nov 8 00:40:13.454993 kubelet[2815]: E1108 00:40:13.454204 2815 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c25bfb9-44c6-4360-955b-d1bd985cf551\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:40:13.455451 kubelet[2815]: E1108 00:40:13.454242 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1c25bfb9-44c6-4360-955b-d1bd985cf551\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:40:13.461177 containerd[1626]: time="2025-11-08T00:40:13.460729758Z" level=error msg="StopPodSandbox for \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\" failed" error="failed to destroy network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.461366 kubelet[2815]: E1108 00:40:13.461275 2815 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:13.461366 kubelet[2815]: E1108 00:40:13.461320 2815 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640"} Nov 8 00:40:13.461959 kubelet[2815]: E1108 00:40:13.461372 2815 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a229dd5-8929-4dea-a351-ff8ac4498f1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:40:13.461959 kubelet[2815]: E1108 00:40:13.461402 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a229dd5-8929-4dea-a351-ff8ac4498f1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:40:13.466898 containerd[1626]: time="2025-11-08T00:40:13.466854112Z" level=error msg="StopPodSandbox for \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\" failed" error="failed to destroy network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.467840 kubelet[2815]: E1108 00:40:13.467799 2815 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:13.467942 kubelet[2815]: E1108 00:40:13.467849 2815 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9"} Nov 8 00:40:13.467942 kubelet[2815]: E1108 00:40:13.467890 2815 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ad6b56e-2fd0-4653-867f-174ff7a29321\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:40:13.467942 kubelet[2815]: E1108 00:40:13.467930 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ad6b56e-2fd0-4653-867f-174ff7a29321\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:40:13.469333 containerd[1626]: time="2025-11-08T00:40:13.469236628Z" level=error msg="StopPodSandbox for \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\" failed" error="failed to destroy network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:40:13.469704 kubelet[2815]: E1108 00:40:13.469647 2815 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:13.469823 kubelet[2815]: E1108 00:40:13.469702 2815 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b"} Nov 8 00:40:13.469823 kubelet[2815]: E1108 00:40:13.469766 2815 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d88aab4-7fae-4885-9d4e-0a85f6911a17\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:40:13.469823 kubelet[2815]: E1108 00:40:13.469796 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d88aab4-7fae-4885-9d4e-0a85f6911a17\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-vq2vj" podUID="5d88aab4-7fae-4885-9d4e-0a85f6911a17" Nov 8 00:40:21.987695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3108847091.mount: Deactivated successfully. Nov 8 00:40:22.083366 containerd[1626]: time="2025-11-08T00:40:22.082940389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:40:22.094272 containerd[1626]: time="2025-11-08T00:40:22.094052409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.882542634s" Nov 8 00:40:22.094272 containerd[1626]: time="2025-11-08T00:40:22.094113342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:40:22.118424 containerd[1626]: time="2025-11-08T00:40:22.118340808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:22.167219 containerd[1626]: time="2025-11-08T00:40:22.166910336Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:22.169039 containerd[1626]: time="2025-11-08T00:40:22.167887652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:40:22.176943 containerd[1626]: time="2025-11-08T00:40:22.176864628Z" level=info msg="CreateContainer within sandbox \"298f3dde743a84974f0d6f1d4982955df07134f252b3ef7f4360beaeefb35afd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:40:22.245119 containerd[1626]: time="2025-11-08T00:40:22.243813719Z" level=info msg="CreateContainer within sandbox \"298f3dde743a84974f0d6f1d4982955df07134f252b3ef7f4360beaeefb35afd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"25eac93899ccb871ae48eef26d3f73f75a32c247f5b39b5c13110fe8d89cb7b3\"" Nov 8 00:40:22.246820 containerd[1626]: time="2025-11-08T00:40:22.246390556Z" level=info msg="StartContainer for \"25eac93899ccb871ae48eef26d3f73f75a32c247f5b39b5c13110fe8d89cb7b3\"" Nov 8 00:40:22.444716 containerd[1626]: time="2025-11-08T00:40:22.444653845Z" level=info msg="StartContainer for \"25eac93899ccb871ae48eef26d3f73f75a32c247f5b39b5c13110fe8d89cb7b3\" returns successfully" Nov 8 00:40:22.585686 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:40:22.585897 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 8 00:40:22.831273 containerd[1626]: time="2025-11-08T00:40:22.830781862Z" level=info msg="StopPodSandbox for \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\"" Nov 8 00:40:23.373063 kubelet[2815]: I1108 00:40:23.368875 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9vqf9" podStartSLOduration=2.604695436 podStartE2EDuration="25.352690538s" podCreationTimestamp="2025-11-08 00:39:58 +0000 UTC" firstStartedPulling="2025-11-08 00:39:59.348443699 +0000 UTC m=+22.621957792" lastFinishedPulling="2025-11-08 00:40:22.096438791 +0000 UTC m=+45.369952894" observedRunningTime="2025-11-08 00:40:23.35099216 +0000 UTC m=+46.624506284" watchObservedRunningTime="2025-11-08 00:40:23.352690538 +0000 UTC m=+46.626204635" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:22.996 [INFO][4007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:22.996 [INFO][4007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" iface="eth0" netns="/var/run/netns/cni-4f9212c1-65cf-2a68-f7b3-51e3bd518fa6" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.000 [INFO][4007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" iface="eth0" netns="/var/run/netns/cni-4f9212c1-65cf-2a68-f7b3-51e3bd518fa6" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.010 [INFO][4007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" iface="eth0" netns="/var/run/netns/cni-4f9212c1-65cf-2a68-f7b3-51e3bd518fa6" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.010 [INFO][4007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.010 [INFO][4007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.491 [INFO][4016] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" HandleID="k8s-pod-network.5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.497 [INFO][4016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.498 [INFO][4016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.515 [WARNING][4016] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" HandleID="k8s-pod-network.5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.515 [INFO][4016] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" HandleID="k8s-pod-network.5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.518 [INFO][4016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:23.528262 containerd[1626]: 2025-11-08 00:40:23.521 [INFO][4007] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:23.530258 systemd[1]: run-netns-cni\x2d4f9212c1\x2d65cf\x2d2a68\x2df7b3\x2d51e3bd518fa6.mount: Deactivated successfully. Nov 8 00:40:23.542028 containerd[1626]: time="2025-11-08T00:40:23.541956765Z" level=info msg="TearDown network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\" successfully" Nov 8 00:40:23.542028 containerd[1626]: time="2025-11-08T00:40:23.542013215Z" level=info msg="StopPodSandbox for \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\" returns successfully" Nov 8 00:40:23.649990 kubelet[2815]: I1108 00:40:23.649757 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqpj8\" (UniqueName: \"kubernetes.io/projected/3547b141-c897-4b93-a962-c86845f8e62c-kube-api-access-sqpj8\") pod \"3547b141-c897-4b93-a962-c86845f8e62c\" (UID: \"3547b141-c897-4b93-a962-c86845f8e62c\") " Nov 8 00:40:23.650456 kubelet[2815]: I1108 00:40:23.649870 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3547b141-c897-4b93-a962-c86845f8e62c-whisker-ca-bundle\") pod \"3547b141-c897-4b93-a962-c86845f8e62c\" (UID: \"3547b141-c897-4b93-a962-c86845f8e62c\") " Nov 8 00:40:23.650456 kubelet[2815]: I1108 00:40:23.650297 2815 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3547b141-c897-4b93-a962-c86845f8e62c-whisker-backend-key-pair\") pod \"3547b141-c897-4b93-a962-c86845f8e62c\" (UID: \"3547b141-c897-4b93-a962-c86845f8e62c\") " Nov 8 00:40:23.670350 kubelet[2815]: I1108 00:40:23.667521 2815 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3547b141-c897-4b93-a962-c86845f8e62c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3547b141-c897-4b93-a962-c86845f8e62c" (UID: "3547b141-c897-4b93-a962-c86845f8e62c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:40:23.670350 kubelet[2815]: I1108 00:40:23.667949 2815 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3547b141-c897-4b93-a962-c86845f8e62c-kube-api-access-sqpj8" (OuterVolumeSpecName: "kube-api-access-sqpj8") pod "3547b141-c897-4b93-a962-c86845f8e62c" (UID: "3547b141-c897-4b93-a962-c86845f8e62c"). InnerVolumeSpecName "kube-api-access-sqpj8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:40:23.674420 systemd[1]: var-lib-kubelet-pods-3547b141\x2dc897\x2d4b93\x2da962\x2dc86845f8e62c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsqpj8.mount: Deactivated successfully. Nov 8 00:40:23.680501 systemd[1]: var-lib-kubelet-pods-3547b141\x2dc897\x2d4b93\x2da962\x2dc86845f8e62c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:40:23.689649 systemd-journald[1183]: Under memory pressure, flushing caches. Nov 8 00:40:23.685949 systemd-resolved[1517]: Under memory pressure, flushing caches. Nov 8 00:40:23.691684 kubelet[2815]: I1108 00:40:23.687417 2815 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3547b141-c897-4b93-a962-c86845f8e62c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3547b141-c897-4b93-a962-c86845f8e62c" (UID: "3547b141-c897-4b93-a962-c86845f8e62c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:40:23.686028 systemd-resolved[1517]: Flushed all caches. Nov 8 00:40:23.751089 kubelet[2815]: I1108 00:40:23.750977 2815 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3547b141-c897-4b93-a962-c86845f8e62c-whisker-backend-key-pair\") on node \"srv-77jcb.gb1.brightbox.com\" DevicePath \"\"" Nov 8 00:40:23.751089 kubelet[2815]: I1108 00:40:23.751039 2815 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sqpj8\" (UniqueName: \"kubernetes.io/projected/3547b141-c897-4b93-a962-c86845f8e62c-kube-api-access-sqpj8\") on node \"srv-77jcb.gb1.brightbox.com\" DevicePath \"\"" Nov 8 00:40:23.751089 kubelet[2815]: I1108 00:40:23.751072 2815 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3547b141-c897-4b93-a962-c86845f8e62c-whisker-ca-bundle\") on node \"srv-77jcb.gb1.brightbox.com\" DevicePath \"\"" Nov 8 00:40:23.926536 containerd[1626]: time="2025-11-08T00:40:23.926367094Z" level=info msg="StopPodSandbox for \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\"" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.023 [INFO][4066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.023 [INFO][4066] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" iface="eth0" netns="/var/run/netns/cni-09c312b7-b9d8-4547-2900-e6a472455ed7" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.024 [INFO][4066] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" iface="eth0" netns="/var/run/netns/cni-09c312b7-b9d8-4547-2900-e6a472455ed7" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.024 [INFO][4066] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" iface="eth0" netns="/var/run/netns/cni-09c312b7-b9d8-4547-2900-e6a472455ed7" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.024 [INFO][4066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.024 [INFO][4066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.066 [INFO][4073] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" HandleID="k8s-pod-network.1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.066 [INFO][4073] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.066 [INFO][4073] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.076 [WARNING][4073] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" HandleID="k8s-pod-network.1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.076 [INFO][4073] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" HandleID="k8s-pod-network.1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.078 [INFO][4073] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:24.084271 containerd[1626]: 2025-11-08 00:40:24.081 [INFO][4066] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:24.085332 containerd[1626]: time="2025-11-08T00:40:24.085289288Z" level=info msg="TearDown network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\" successfully" Nov 8 00:40:24.085452 containerd[1626]: time="2025-11-08T00:40:24.085333734Z" level=info msg="StopPodSandbox for \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\" returns successfully" Nov 8 00:40:24.088572 containerd[1626]: time="2025-11-08T00:40:24.088521059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d465f66d6-5v9hs,Uid:321e585a-41b7-4e8f-995a-c57a69c6e824,Namespace:calico-system,Attempt:1,}" Nov 8 00:40:24.093802 systemd[1]: run-netns-cni\x2d09c312b7\x2db9d8\x2d4547\x2d2900\x2de6a472455ed7.mount: Deactivated successfully. 
Nov 8 00:40:24.364500 systemd-networkd[1261]: calie456abe0a30: Link UP Nov 8 00:40:24.376183 systemd-networkd[1261]: calie456abe0a30: Gained carrier Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.162 [INFO][4080] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.178 [INFO][4080] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0 calico-kube-controllers-7d465f66d6- calico-system 321e585a-41b7-4e8f-995a-c57a69c6e824 885 0 2025-11-08 00:39:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d465f66d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-77jcb.gb1.brightbox.com calico-kube-controllers-7d465f66d6-5v9hs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie456abe0a30 [] [] }} ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Namespace="calico-system" Pod="calico-kube-controllers-7d465f66d6-5v9hs" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.178 [INFO][4080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Namespace="calico-system" Pod="calico-kube-controllers-7d465f66d6-5v9hs" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.227 [INFO][4092] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" HandleID="k8s-pod-network.c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.227 [INFO][4092] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" HandleID="k8s-pod-network.c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5770), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-77jcb.gb1.brightbox.com", "pod":"calico-kube-controllers-7d465f66d6-5v9hs", "timestamp":"2025-11-08 00:40:24.227433674 +0000 UTC"}, Hostname:"srv-77jcb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.227 [INFO][4092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.227 [INFO][4092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.227 [INFO][4092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-77jcb.gb1.brightbox.com' Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.242 [INFO][4092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.260 [INFO][4092] ipam/ipam.go 394: Looking up existing affinities for host host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.267 [INFO][4092] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.270 [INFO][4092] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.273 [INFO][4092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.273 [INFO][4092] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.276 [INFO][4092] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.285 [INFO][4092] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.308 [INFO][4092] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.129/26] block=192.168.12.128/26 handle="k8s-pod-network.c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.308 [INFO][4092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.129/26] handle="k8s-pod-network.c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.308 [INFO][4092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
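[Editor's annotation] The ipam/ipam.go entries above trace Calico's block-affinity IPAM end to end: acquire the host-wide lock, look up the host's affine blocks, confirm affinity for 192.168.12.128/26, assign one address from that block, and write the block back to claim the IP before releasing the lock. The sketch below is a toy in-memory model of the assignment step only, under stated assumptions (no datastore writes, no reserved addresses); it is not Calico's implementation.

    // ipamsketch.go: toy model of the block-affinity assignment traced by
    // the ipam/ipam.go log entries above. Assumptions: in-memory only; the
    // real allocator persists the block ("Writing block in order to claim
    // IPs") and may reserve addresses within it.
    package main

    import (
    	"fmt"
    	"net/netip"
    )

    type block struct {
    	cidr      netip.Prefix          // the host's affine block, e.g. 192.168.12.128/26
    	allocated map[netip.Addr]string // address -> handle
    }

    func (b *block) assign(handle string) (netip.Addr, bool) {
    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if _, used := b.allocated[a]; !used {
    			b.allocated[a] = handle
    			return a, true
    		}
    	}
    	return netip.Addr{}, false // block exhausted; the real allocator would claim another block
    }

    func main() {
    	b := &block{
    		cidr:      netip.MustParsePrefix("192.168.12.128/26"),
    		allocated: map[netip.Addr]string{},
    	}
    	// Handle shape as in the log ("k8s-pod-network.<container id>"), truncated here.
    	ip, ok := b.assign("k8s-pod-network.c10ae75e...")
    	fmt.Println(ip, ok)
    }

In the log, the allocator hands 192.168.12.129 to the calico-kube-controllers pod here and 192.168.12.130 to the whisker pod further down, both out of the same affine /26.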
Nov 8 00:40:24.521316 containerd[1626]: 2025-11-08 00:40:24.308 [INFO][4092] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.129/26] IPv6=[] ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" HandleID="k8s-pod-network.c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.527597 containerd[1626]: 2025-11-08 00:40:24.313 [INFO][4080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Namespace="calico-system" Pod="calico-kube-controllers-7d465f66d6-5v9hs" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0", GenerateName:"calico-kube-controllers-7d465f66d6-", Namespace:"calico-system", SelfLink:"", UID:"321e585a-41b7-4e8f-995a-c57a69c6e824", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d465f66d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7d465f66d6-5v9hs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie456abe0a30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:24.527597 containerd[1626]: 2025-11-08 00:40:24.313 [INFO][4080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.129/32] ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Namespace="calico-system" Pod="calico-kube-controllers-7d465f66d6-5v9hs" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.527597 containerd[1626]: 2025-11-08 00:40:24.313 [INFO][4080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie456abe0a30 ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Namespace="calico-system" Pod="calico-kube-controllers-7d465f66d6-5v9hs" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.527597 containerd[1626]: 2025-11-08 00:40:24.414 [INFO][4080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Namespace="calico-system" Pod="calico-kube-controllers-7d465f66d6-5v9hs" 
WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.527597 containerd[1626]: 2025-11-08 00:40:24.426 [INFO][4080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Namespace="calico-system" Pod="calico-kube-controllers-7d465f66d6-5v9hs" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0", GenerateName:"calico-kube-controllers-7d465f66d6-", Namespace:"calico-system", SelfLink:"", UID:"321e585a-41b7-4e8f-995a-c57a69c6e824", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d465f66d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba", Pod:"calico-kube-controllers-7d465f66d6-5v9hs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie456abe0a30", MAC:"ee:5c:23:a1:02:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:24.527597 containerd[1626]: 2025-11-08 00:40:24.489 [INFO][4080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba" Namespace="calico-system" Pod="calico-kube-controllers-7d465f66d6-5v9hs" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:24.665519 kubelet[2815]: I1108 00:40:24.660761 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff4e3273-e49c-43ab-a17a-ef1a2a65c067-whisker-ca-bundle\") pod \"whisker-84d66d4898-wkptg\" (UID: \"ff4e3273-e49c-43ab-a17a-ef1a2a65c067\") " pod="calico-system/whisker-84d66d4898-wkptg" Nov 8 00:40:24.665519 kubelet[2815]: I1108 00:40:24.660884 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ff4e3273-e49c-43ab-a17a-ef1a2a65c067-whisker-backend-key-pair\") pod \"whisker-84d66d4898-wkptg\" (UID: \"ff4e3273-e49c-43ab-a17a-ef1a2a65c067\") " pod="calico-system/whisker-84d66d4898-wkptg" Nov 8 00:40:24.665519 kubelet[2815]: I1108 00:40:24.660941 2815 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cftv8\" 
(UniqueName: \"kubernetes.io/projected/ff4e3273-e49c-43ab-a17a-ef1a2a65c067-kube-api-access-cftv8\") pod \"whisker-84d66d4898-wkptg\" (UID: \"ff4e3273-e49c-43ab-a17a-ef1a2a65c067\") " pod="calico-system/whisker-84d66d4898-wkptg" Nov 8 00:40:24.759692 containerd[1626]: time="2025-11-08T00:40:24.757313460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:40:24.759692 containerd[1626]: time="2025-11-08T00:40:24.759234867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:40:24.759692 containerd[1626]: time="2025-11-08T00:40:24.759260640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:24.764298 containerd[1626]: time="2025-11-08T00:40:24.763238283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:24.921569 containerd[1626]: time="2025-11-08T00:40:24.916396879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d66d4898-wkptg,Uid:ff4e3273-e49c-43ab-a17a-ef1a2a65c067,Namespace:calico-system,Attempt:0,}" Nov 8 00:40:24.930404 containerd[1626]: time="2025-11-08T00:40:24.929841384Z" level=info msg="StopPodSandbox for \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\"" Nov 8 00:40:24.934172 containerd[1626]: time="2025-11-08T00:40:24.933956228Z" level=info msg="StopPodSandbox for \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\"" Nov 8 00:40:24.948305 kubelet[2815]: I1108 00:40:24.947105 2815 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3547b141-c897-4b93-a962-c86845f8e62c" path="/var/lib/kubelet/pods/3547b141-c897-4b93-a962-c86845f8e62c/volumes" Nov 8 00:40:24.950321 containerd[1626]: time="2025-11-08T00:40:24.950250593Z" level=info msg="StopPodSandbox for \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\"" Nov 8 00:40:25.326211 containerd[1626]: time="2025-11-08T00:40:25.324881586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d465f66d6-5v9hs,Uid:321e585a-41b7-4e8f-995a-c57a69c6e824,Namespace:calico-system,Attempt:1,} returns sandbox id \"c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba\"" Nov 8 00:40:25.342402 containerd[1626]: time="2025-11-08T00:40:25.340318690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:40:25.757322 containerd[1626]: time="2025-11-08T00:40:25.757015749Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:25.833724 containerd[1626]: time="2025-11-08T00:40:25.761687033Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:40:25.834685 containerd[1626]: time="2025-11-08T00:40:25.765519312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:40:25.844932 kubelet[2815]: E1108 00:40:25.839703 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:40:25.867061 kubelet[2815]: E1108 00:40:25.865181 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:40:25.983542 kubelet[2815]: E1108 00:40:25.983164 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dm9b2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d465f66d6-5v9hs_calico-system(321e585a-41b7-4e8f-995a-c57a69c6e824): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:25.986274 kubelet[2815]: E1108 00:40:25.985850 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:40:25.993402 containerd[1626]: time="2025-11-08T00:40:25.992984375Z" level=info msg="StopPodSandbox for \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\"" Nov 8 00:40:26.018507 systemd-networkd[1261]: calidd6f6ffdd3a: Link UP Nov 8 00:40:26.027617 systemd-networkd[1261]: calidd6f6ffdd3a: Gained carrier Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.579 [INFO][4269] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.583 [INFO][4269] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" iface="eth0" netns="/var/run/netns/cni-4b5eb852-2782-b4a4-e9b8-0ebb290365c3" Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.585 [INFO][4269] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" iface="eth0" netns="/var/run/netns/cni-4b5eb852-2782-b4a4-e9b8-0ebb290365c3" Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.586 [INFO][4269] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" iface="eth0" netns="/var/run/netns/cni-4b5eb852-2782-b4a4-e9b8-0ebb290365c3" Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.587 [INFO][4269] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.587 [INFO][4269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.866 [INFO][4319] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" HandleID="k8s-pod-network.b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.868 [INFO][4319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.934 [INFO][4319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.974 [WARNING][4319] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" HandleID="k8s-pod-network.b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.974 [INFO][4319] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" HandleID="k8s-pod-network.b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:25.987 [INFO][4319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:26.038559 containerd[1626]: 2025-11-08 00:40:26.023 [INFO][4269] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:26.039301 containerd[1626]: time="2025-11-08T00:40:26.038753745Z" level=info msg="TearDown network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\" successfully" Nov 8 00:40:26.039301 containerd[1626]: time="2025-11-08T00:40:26.038797003Z" level=info msg="StopPodSandbox for \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\" returns successfully" Nov 8 00:40:26.055928 systemd[1]: run-netns-cni\x2d4b5eb852\x2d2782\x2db4a4\x2de9b8\x2d0ebb290365c3.mount: Deactivated successfully. Nov 8 00:40:26.063026 containerd[1626]: time="2025-11-08T00:40:26.062904339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f7fc8b8c-rqgv4,Uid:1c25bfb9-44c6-4360-955b-d1bd985cf551,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:25.594 [INFO][4284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:25.599 [INFO][4284] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" iface="eth0" netns="/var/run/netns/cni-ad886a0a-d5b2-cc59-6282-8f3fa6c3f553" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:25.605 [INFO][4284] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" iface="eth0" netns="/var/run/netns/cni-ad886a0a-d5b2-cc59-6282-8f3fa6c3f553" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:25.605 [INFO][4284] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" iface="eth0" netns="/var/run/netns/cni-ad886a0a-d5b2-cc59-6282-8f3fa6c3f553" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:25.605 [INFO][4284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:25.605 [INFO][4284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:25.902 [INFO][4321] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" HandleID="k8s-pod-network.121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:25.903 [INFO][4321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:25.987 [INFO][4321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:26.062 [WARNING][4321] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" HandleID="k8s-pod-network.121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:26.063 [INFO][4321] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" HandleID="k8s-pod-network.121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:26.070 [INFO][4321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:26.090344 containerd[1626]: 2025-11-08 00:40:26.076 [INFO][4284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:26.091032 containerd[1626]: time="2025-11-08T00:40:26.090942479Z" level=info msg="TearDown network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\" successfully" Nov 8 00:40:26.091032 containerd[1626]: time="2025-11-08T00:40:26.090980034Z" level=info msg="StopPodSandbox for \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\" returns successfully" Nov 8 00:40:26.104637 systemd[1]: run-netns-cni\x2dad886a0a\x2dd5b2\x2dcc59\x2d6282\x2d8f3fa6c3f553.mount: Deactivated successfully. 
Nov 8 00:40:26.159767 containerd[1626]: time="2025-11-08T00:40:26.159516518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vq2vj,Uid:5d88aab4-7fae-4885-9d4e-0a85f6911a17,Namespace:kube-system,Attempt:1,}" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.415 [INFO][4257] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.500 [INFO][4257] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0 whisker-84d66d4898- calico-system ff4e3273-e49c-43ab-a17a-ef1a2a65c067 906 0 2025-11-08 00:40:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:84d66d4898 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-77jcb.gb1.brightbox.com whisker-84d66d4898-wkptg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidd6f6ffdd3a [] [] }} ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Namespace="calico-system" Pod="whisker-84d66d4898-wkptg" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.500 [INFO][4257] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Namespace="calico-system" Pod="whisker-84d66d4898-wkptg" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.762 [INFO][4308] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" HandleID="k8s-pod-network.97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.772 [INFO][4308] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" HandleID="k8s-pod-network.97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000101a90), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-77jcb.gb1.brightbox.com", "pod":"whisker-84d66d4898-wkptg", "timestamp":"2025-11-08 00:40:25.762320217 +0000 UTC"}, Hostname:"srv-77jcb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.772 [INFO][4308] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.772 [INFO][4308] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.772 [INFO][4308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-77jcb.gb1.brightbox.com' Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.800 [INFO][4308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.837 [INFO][4308] ipam/ipam.go 394: Looking up existing affinities for host host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.869 [INFO][4308] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.880 [INFO][4308] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.894 [INFO][4308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.894 [INFO][4308] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.898 [INFO][4308] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.907 [INFO][4308] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.928 [INFO][4308] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.130/26] block=192.168.12.128/26 handle="k8s-pod-network.97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.928 [INFO][4308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.130/26] handle="k8s-pod-network.97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.928 [INFO][4308] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:40:26.180415 containerd[1626]: 2025-11-08 00:40:25.928 [INFO][4308] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.130/26] IPv6=[] ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" HandleID="k8s-pod-network.97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" Nov 8 00:40:26.184829 containerd[1626]: 2025-11-08 00:40:25.949 [INFO][4257] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Namespace="calico-system" Pod="whisker-84d66d4898-wkptg" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0", GenerateName:"whisker-84d66d4898-", Namespace:"calico-system", SelfLink:"", UID:"ff4e3273-e49c-43ab-a17a-ef1a2a65c067", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 40, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84d66d4898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"", Pod:"whisker-84d66d4898-wkptg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd6f6ffdd3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:26.184829 containerd[1626]: 2025-11-08 00:40:25.950 [INFO][4257] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.130/32] ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Namespace="calico-system" Pod="whisker-84d66d4898-wkptg" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" Nov 8 00:40:26.184829 containerd[1626]: 2025-11-08 00:40:25.950 [INFO][4257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd6f6ffdd3a ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Namespace="calico-system" Pod="whisker-84d66d4898-wkptg" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" Nov 8 00:40:26.184829 containerd[1626]: 2025-11-08 00:40:26.032 [INFO][4257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Namespace="calico-system" Pod="whisker-84d66d4898-wkptg" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" Nov 8 00:40:26.184829 containerd[1626]: 2025-11-08 00:40:26.034 [INFO][4257] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Namespace="calico-system" 
Pod="whisker-84d66d4898-wkptg" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0", GenerateName:"whisker-84d66d4898-", Namespace:"calico-system", SelfLink:"", UID:"ff4e3273-e49c-43ab-a17a-ef1a2a65c067", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 40, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"84d66d4898", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab", Pod:"whisker-84d66d4898-wkptg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd6f6ffdd3a", MAC:"5a:c6:d2:2b:ea:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:26.184829 containerd[1626]: 2025-11-08 00:40:26.068 [INFO][4257] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab" Namespace="calico-system" Pod="whisker-84d66d4898-wkptg" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--84d66d4898--wkptg-eth0" Nov 8 00:40:26.183253 systemd-networkd[1261]: calie456abe0a30: Gained IPv6LL Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:25.583 [INFO][4283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:25.583 [INFO][4283] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" iface="eth0" netns="/var/run/netns/cni-061e8836-19bf-1102-63c1-9ca5c7644f41" Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:25.584 [INFO][4283] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" iface="eth0" netns="/var/run/netns/cni-061e8836-19bf-1102-63c1-9ca5c7644f41" Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:25.586 [INFO][4283] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" iface="eth0" netns="/var/run/netns/cni-061e8836-19bf-1102-63c1-9ca5c7644f41" Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:25.586 [INFO][4283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:25.587 [INFO][4283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:25.946 [INFO][4318] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" HandleID="k8s-pod-network.5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:25.946 [INFO][4318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:26.072 [INFO][4318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:26.118 [WARNING][4318] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" HandleID="k8s-pod-network.5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:26.124 [INFO][4318] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" HandleID="k8s-pod-network.5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:26.138 [INFO][4318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:26.190452 containerd[1626]: 2025-11-08 00:40:26.149 [INFO][4283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:26.191325 containerd[1626]: time="2025-11-08T00:40:26.191262474Z" level=info msg="TearDown network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\" successfully" Nov 8 00:40:26.191325 containerd[1626]: time="2025-11-08T00:40:26.191299352Z" level=info msg="StopPodSandbox for \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\" returns successfully" Nov 8 00:40:26.200859 systemd[1]: run-netns-cni\x2d061e8836\x2d19bf\x2d1102\x2d63c1\x2d9ca5c7644f41.mount: Deactivated successfully. 
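[Editor's note] The StopPodSandbox path above is deliberately idempotent: the workload's veth was already gone ("Nothing to do"), and the IPAM release found no address under the handle, so it logged a WARNING and moved on rather than failing — which lets repeated or raced teardowns converge cleanly before the netns mount is removed. A sketch of that release semantic (hypothetical store, same caveats as above):

```go
// Idempotent release, mirroring "Asked to release address but it doesn't
// exist. Ignoring" — a missing handle is a warning, not an error.
package main

import (
	"fmt"
	"net/netip"
)

type handleStore map[string]netip.Addr

// ReleaseByHandle frees whatever the handle holds; releasing an unknown
// handle is a no-op so CNI DEL can safely run more than once.
func (s handleStore) ReleaseByHandle(handle string) {
	addr, ok := s[handle]
	if !ok {
		fmt.Printf("WARNING: handle %q has no address; ignoring\n", handle)
		return
	}
	delete(s, handle)
	fmt.Printf("released %s (handle %q)\n", addr, handle)
}

func main() {
	s := handleStore{}
	s.ReleaseByHandle("k8s-pod-network.5717dfeddb3434b4e081...") // already gone: warn, continue
}
```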
Nov 8 00:40:26.207528 containerd[1626]: time="2025-11-08T00:40:26.207477165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f7fc8b8c-7f5zz,Uid:0ad6b56e-2fd0-4653-867f-174ff7a29321,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:40:26.277172 kernel: bpftool[4427]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:40:26.373418 kubelet[2815]: E1108 00:40:26.371973 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:40:26.514853 containerd[1626]: time="2025-11-08T00:40:26.514511368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:40:26.514853 containerd[1626]: time="2025-11-08T00:40:26.514617967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:40:26.514853 containerd[1626]: time="2025-11-08T00:40:26.514684658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:26.519667 containerd[1626]: time="2025-11-08T00:40:26.518473730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.417 [INFO][4365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.417 [INFO][4365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" iface="eth0" netns="/var/run/netns/cni-0ab495da-72ed-333c-d690-dcb85cf9fe28" Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.417 [INFO][4365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" iface="eth0" netns="/var/run/netns/cni-0ab495da-72ed-333c-d690-dcb85cf9fe28" Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.418 [INFO][4365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" iface="eth0" netns="/var/run/netns/cni-0ab495da-72ed-333c-d690-dcb85cf9fe28" Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.418 [INFO][4365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.418 [INFO][4365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.638 [INFO][4459] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" HandleID="k8s-pod-network.f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.638 [INFO][4459] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.638 [INFO][4459] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.673 [WARNING][4459] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" HandleID="k8s-pod-network.f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.673 [INFO][4459] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" HandleID="k8s-pod-network.f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.679 [INFO][4459] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:26.698511 containerd[1626]: 2025-11-08 00:40:26.689 [INFO][4365] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:26.699232 containerd[1626]: time="2025-11-08T00:40:26.698757819Z" level=info msg="TearDown network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\" successfully" Nov 8 00:40:26.699232 containerd[1626]: time="2025-11-08T00:40:26.698823322Z" level=info msg="StopPodSandbox for \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\" returns successfully" Nov 8 00:40:26.712627 containerd[1626]: time="2025-11-08T00:40:26.710804302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4ddc6,Uid:b360798f-4525-4a64-8263-1b2065da4cca,Namespace:kube-system,Attempt:1,}" Nov 8 00:40:26.887014 containerd[1626]: time="2025-11-08T00:40:26.885087005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d66d4898-wkptg,Uid:ff4e3273-e49c-43ab-a17a-ef1a2a65c067,Namespace:calico-system,Attempt:0,} returns sandbox id \"97a926ea4e5747a5818ee2e9403ec7079dea694d16c7c20d7dd359c8c23577ab\"" Nov 8 00:40:26.901400 systemd-networkd[1261]: calib42f5c03eb9: Link UP Nov 8 00:40:26.909851 systemd-networkd[1261]: calib42f5c03eb9: Gained carrier Nov 8 00:40:26.921800 containerd[1626]: time="2025-11-08T00:40:26.921435220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:40:26.940717 containerd[1626]: time="2025-11-08T00:40:26.940671997Z" level=info msg="StopPodSandbox for \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\"" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.461 [INFO][4402] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0 calico-apiserver-86f7fc8b8c- calico-apiserver 1c25bfb9-44c6-4360-955b-d1bd985cf551 912 0 2025-11-08 00:39:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f7fc8b8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-77jcb.gb1.brightbox.com calico-apiserver-86f7fc8b8c-rqgv4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib42f5c03eb9 [] [] }} ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-rqgv4" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.462 [INFO][4402] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-rqgv4" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.725 [INFO][4472] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" HandleID="k8s-pod-network.1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.727 [INFO][4472] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" HandleID="k8s-pod-network.1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-77jcb.gb1.brightbox.com", "pod":"calico-apiserver-86f7fc8b8c-rqgv4", "timestamp":"2025-11-08 00:40:26.72510362 +0000 UTC"}, Hostname:"srv-77jcb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.728 [INFO][4472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.728 [INFO][4472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.730 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-77jcb.gb1.brightbox.com' Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.757 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.773 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.791 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.796 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.811 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.811 [INFO][4472] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.817 [INFO][4472] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878 Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.843 [INFO][4472] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.853 [INFO][4472] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.131/26] block=192.168.12.128/26 handle="k8s-pod-network.1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.853 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.131/26] handle="k8s-pod-network.1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:26.970186 
containerd[1626]: 2025-11-08 00:40:26.853 [INFO][4472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:26.970186 containerd[1626]: 2025-11-08 00:40:26.853 [INFO][4472] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.131/26] IPv6=[] ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" HandleID="k8s-pod-network.1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:26.974298 containerd[1626]: 2025-11-08 00:40:26.880 [INFO][4402] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-rqgv4" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0", GenerateName:"calico-apiserver-86f7fc8b8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c25bfb9-44c6-4360-955b-d1bd985cf551", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f7fc8b8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-86f7fc8b8c-rqgv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42f5c03eb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:26.974298 containerd[1626]: 2025-11-08 00:40:26.883 [INFO][4402] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.131/32] ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-rqgv4" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:26.974298 containerd[1626]: 2025-11-08 00:40:26.883 [INFO][4402] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib42f5c03eb9 ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-rqgv4" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:26.974298 containerd[1626]: 2025-11-08 00:40:26.916 [INFO][4402] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-rqgv4" 
WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:26.974298 containerd[1626]: 2025-11-08 00:40:26.936 [INFO][4402] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-rqgv4" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0", GenerateName:"calico-apiserver-86f7fc8b8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c25bfb9-44c6-4360-955b-d1bd985cf551", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f7fc8b8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878", Pod:"calico-apiserver-86f7fc8b8c-rqgv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42f5c03eb9", MAC:"ce:10:24:91:9d:c8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:26.974298 containerd[1626]: 2025-11-08 00:40:26.962 [INFO][4402] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-rqgv4" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:27.110977 systemd-networkd[1261]: cali90cec46aed6: Link UP Nov 8 00:40:27.116010 systemd-networkd[1261]: cali90cec46aed6: Gained carrier Nov 8 00:40:27.116041 systemd[1]: run-netns-cni\x2d0ab495da\x2d72ed\x2d333c\x2dd690\x2ddcb85cf9fe28.mount: Deactivated successfully. 
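[Editor's note] Each endpoint above gets a host-side veth named "cali" plus eleven hex characters (calidd6f6ffdd3a, calib42f5c03eb9, cali90cec46aed6), which keeps names unique per workload and inside Linux's 15-character IFNAMSIZ limit. One plausible derivation is hashing an endpoint key and truncating; the exact hash and input Calico uses are not shown in this log, so treat the sketch below as an assumption:

```go
// Plausible veth-name derivation: "cali" + first 11 hex chars of a hash of
// the endpoint key, fitting IFNAMSIZ (15 chars + NUL). The choice of SHA-1
// and of the key string are assumptions, not confirmed by the log.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

func vethName(endpointKey string) string {
	sum := sha1.Sum([]byte(endpointKey))
	return "cali" + hex.EncodeToString(sum[:])[:11] // 4 + 11 = 15 chars
}

func main() {
	fmt.Println(vethName("srv-77jcb.gb1.brightbox.com/whisker-84d66d4898-wkptg/eth0"))
}
```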
Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.605 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0 calico-apiserver-86f7fc8b8c- calico-apiserver 0ad6b56e-2fd0-4653-867f-174ff7a29321 913 0 2025-11-08 00:39:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f7fc8b8c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-77jcb.gb1.brightbox.com calico-apiserver-86f7fc8b8c-7f5zz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90cec46aed6 [] [] }} ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-7f5zz" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.605 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-7f5zz" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.858 [INFO][4492] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" HandleID="k8s-pod-network.b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.869 [INFO][4492] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" HandleID="k8s-pod-network.b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000124ce0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-77jcb.gb1.brightbox.com", "pod":"calico-apiserver-86f7fc8b8c-7f5zz", "timestamp":"2025-11-08 00:40:26.858890764 +0000 UTC"}, Hostname:"srv-77jcb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.869 [INFO][4492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.869 [INFO][4492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.869 [INFO][4492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-77jcb.gb1.brightbox.com' Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.889 [INFO][4492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.928 [INFO][4492] ipam/ipam.go 394: Looking up existing affinities for host host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.968 [INFO][4492] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.979 [INFO][4492] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.988 [INFO][4492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:26.988 [INFO][4492] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:27.001 [INFO][4492] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:27.028 [INFO][4492] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:27.053 [INFO][4492] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.132/26] block=192.168.12.128/26 handle="k8s-pod-network.b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:27.053 [INFO][4492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.132/26] handle="k8s-pod-network.b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:27.053 [INFO][4492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
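[Editor's note] Note the serialization visible across these assignments: workers [4308], [4472], [4492], and [4529] each log "About to acquire host-wide IPAM lock" and proceed one at a time, which is why the claims come out strictly sequential (.130, .131, .132, .133) even though the CNI ADDs overlap in time. A small sketch of that pattern — concurrent requesters funneled through one lock over a shared cursor; generic illustration only:

```go
// Concurrent CNI ADDs serialized by one host-wide lock: each goroutine
// claims the next address in 192.168.12.128/26, so the claimed IPs are
// sequential even when requests overlap. Not Calico's implementation.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex
		next = netip.MustParseAddr("192.168.12.130") // first free address in this log window
		wg   sync.WaitGroup
	)
	claim := func(pod string) {
		defer wg.Done()
		mu.Lock() // "Acquired host-wide IPAM lock."
		ip := next
		next = next.Next()
		mu.Unlock() // "Released host-wide IPAM lock."
		fmt.Printf("%s -> %s\n", pod, ip)
	}
	// Which pod gets which address depends on goroutine scheduling;
	// the log records one such ordering.
	for _, pod := range []string{"whisker-wkptg", "apiserver-rqgv4", "apiserver-7f5zz", "coredns-vq2vj"} {
		wg.Add(1)
		go claim(pod)
	}
	wg.Wait()
}
```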
Nov 8 00:40:27.168732 containerd[1626]: 2025-11-08 00:40:27.053 [INFO][4492] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.132/26] IPv6=[] ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" HandleID="k8s-pod-network.b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:27.173843 containerd[1626]: 2025-11-08 00:40:27.067 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-7f5zz" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0", GenerateName:"calico-apiserver-86f7fc8b8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad6b56e-2fd0-4653-867f-174ff7a29321", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f7fc8b8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-86f7fc8b8c-7f5zz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90cec46aed6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:27.173843 containerd[1626]: 2025-11-08 00:40:27.067 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.132/32] ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-7f5zz" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:27.173843 containerd[1626]: 2025-11-08 00:40:27.067 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90cec46aed6 ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-7f5zz" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:27.173843 containerd[1626]: 2025-11-08 00:40:27.126 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-7f5zz" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:27.173843 containerd[1626]: 2025-11-08 00:40:27.135 
[INFO][4441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-7f5zz" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0", GenerateName:"calico-apiserver-86f7fc8b8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad6b56e-2fd0-4653-867f-174ff7a29321", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f7fc8b8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c", Pod:"calico-apiserver-86f7fc8b8c-7f5zz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90cec46aed6", MAC:"b2:df:7b:c2:eb:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:27.173843 containerd[1626]: 2025-11-08 00:40:27.159 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c" Namespace="calico-apiserver" Pod="calico-apiserver-86f7fc8b8c-7f5zz" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:27.203172 containerd[1626]: time="2025-11-08T00:40:27.196832250Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:40:27.203172 containerd[1626]: time="2025-11-08T00:40:27.198055851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:40:27.203172 containerd[1626]: time="2025-11-08T00:40:27.198080525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:27.203172 containerd[1626]: time="2025-11-08T00:40:27.200415357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:27.225304 systemd-networkd[1261]: vxlan.calico: Link UP Nov 8 00:40:27.225700 systemd-networkd[1261]: vxlan.calico: Gained carrier Nov 8 00:40:27.401168 containerd[1626]: time="2025-11-08T00:40:27.366362873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:40:27.401168 containerd[1626]: time="2025-11-08T00:40:27.366490356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:40:27.401168 containerd[1626]: time="2025-11-08T00:40:27.366510905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:27.401168 containerd[1626]: time="2025-11-08T00:40:27.366682301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:27.419812 containerd[1626]: time="2025-11-08T00:40:27.418414177Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:27.437210 containerd[1626]: time="2025-11-08T00:40:27.436981911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:40:27.437210 containerd[1626]: time="2025-11-08T00:40:27.437105319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:40:27.439001 kubelet[2815]: E1108 00:40:27.438334 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:40:27.439001 kubelet[2815]: E1108 00:40:27.438401 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:40:27.439001 kubelet[2815]: E1108 00:40:27.438549 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:901a64b030a14723b934dd11dbc62d64,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cftv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d66d4898-wkptg_calico-system(ff4e3273-e49c-43ab-a17a-ef1a2a65c067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:27.455247 containerd[1626]: time="2025-11-08T00:40:27.454223939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:40:27.627444 systemd-networkd[1261]: cali89dad23c969: Link UP Nov 8 00:40:27.627764 systemd-networkd[1261]: cali89dad23c969: Gained carrier Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:26.760 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0 coredns-668d6bf9bc- kube-system 5d88aab4-7fae-4885-9d4e-0a85f6911a17 914 0 2025-11-08 00:39:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-77jcb.gb1.brightbox.com coredns-668d6bf9bc-vq2vj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89dad23c969 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq2vj" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:26.769 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq2vj" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.221 [INFO][4529] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" HandleID="k8s-pod-network.d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.221 [INFO][4529] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" HandleID="k8s-pod-network.d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c36a0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-77jcb.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-vq2vj", "timestamp":"2025-11-08 00:40:27.221500919 +0000 UTC"}, Hostname:"srv-77jcb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.221 [INFO][4529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.221 [INFO][4529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.267 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-77jcb.gb1.brightbox.com' Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.325 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.396 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.420 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.428 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.433 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.433 [INFO][4529] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.462 [INFO][4529] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.509 [INFO][4529] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.526 [INFO][4529] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.133/26] block=192.168.12.128/26 
handle="k8s-pod-network.d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.526 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.133/26] handle="k8s-pod-network.d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.526 [INFO][4529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:27.704528 containerd[1626]: 2025-11-08 00:40:27.526 [INFO][4529] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.133/26] IPv6=[] ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" HandleID="k8s-pod-network.d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:27.717307 containerd[1626]: 2025-11-08 00:40:27.585 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq2vj" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d88aab4-7fae-4885-9d4e-0a85f6911a17", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-vq2vj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dad23c969", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:27.717307 containerd[1626]: 2025-11-08 00:40:27.585 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.133/32] ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq2vj" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:27.717307 containerd[1626]: 2025-11-08 00:40:27.585 [INFO][4430] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to cali89dad23c969 ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq2vj" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:27.717307 containerd[1626]: 2025-11-08 00:40:27.622 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq2vj" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:27.717307 containerd[1626]: 2025-11-08 00:40:27.624 [INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq2vj" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d88aab4-7fae-4885-9d4e-0a85f6911a17", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb", Pod:"coredns-668d6bf9bc-vq2vj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dad23c969", MAC:"ce:c2:d5:b8:ae:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:27.717307 containerd[1626]: 2025-11-08 00:40:27.695 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb" Namespace="kube-system" Pod="coredns-668d6bf9bc-vq2vj" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:27.755051 containerd[1626]: time="2025-11-08T00:40:27.752536202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f7fc8b8c-rqgv4,Uid:1c25bfb9-44c6-4360-955b-d1bd985cf551,Namespace:calico-apiserver,Attempt:1,} returns sandbox id 
\"1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878\"" Nov 8 00:40:27.774472 systemd-networkd[1261]: calidd6f6ffdd3a: Gained IPv6LL Nov 8 00:40:27.820705 systemd-networkd[1261]: calie3a70d64b6f: Link UP Nov 8 00:40:27.825746 systemd-networkd[1261]: calie3a70d64b6f: Gained carrier Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.510 [INFO][4562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.512 [INFO][4562] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" iface="eth0" netns="/var/run/netns/cni-a9497d40-21e8-f230-e664-1c29312ff83d" Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.515 [INFO][4562] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" iface="eth0" netns="/var/run/netns/cni-a9497d40-21e8-f230-e664-1c29312ff83d" Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.520 [INFO][4562] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" iface="eth0" netns="/var/run/netns/cni-a9497d40-21e8-f230-e664-1c29312ff83d" Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.520 [INFO][4562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.520 [INFO][4562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.862 [INFO][4685] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" HandleID="k8s-pod-network.e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.862 [INFO][4685] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.863 [INFO][4685] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.876 [WARNING][4685] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" HandleID="k8s-pod-network.e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.876 [INFO][4685] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" HandleID="k8s-pod-network.e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.883 [INFO][4685] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:27.910573 containerd[1626]: 2025-11-08 00:40:27.896 [INFO][4562] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.146 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0 coredns-668d6bf9bc- kube-system b360798f-4525-4a64-8263-1b2065da4cca 929 0 2025-11-08 00:39:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-77jcb.gb1.brightbox.com coredns-668d6bf9bc-4ddc6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie3a70d64b6f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-4ddc6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.146 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-4ddc6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.442 [INFO][4589] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" HandleID="k8s-pod-network.53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.445 [INFO][4589] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" HandleID="k8s-pod-network.53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043fdd0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-77jcb.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-4ddc6", "timestamp":"2025-11-08 00:40:27.442593671 +0000 UTC"}, Hostname:"srv-77jcb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.445 [INFO][4589] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.532 [INFO][4589] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.532 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-77jcb.gb1.brightbox.com' Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.636 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.676 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.707 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.730 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.735 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.736 [INFO][4589] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.742 [INFO][4589] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9 Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.756 [INFO][4589] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.776 [INFO][4589] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.134/26] block=192.168.12.128/26 handle="k8s-pod-network.53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.779 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.134/26] handle="k8s-pod-network.53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.780 [INFO][4589] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
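From "Trying affinity" through "Successfully claimed IPs", the allocator confirms this host owns the block 192.168.12.128/26, loads it, and takes the next free ordinal, yielding 192.168.12.134 here. A sketch of that scan under a simplifying assumption (a plain set of used addresses instead of the real block document, with ordinals .128-.133 presumed already taken, as the surrounding assignments imply):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node's affine block from the log: 192.168.12.128/26 (.128-.191).
	block := netip.MustParsePrefix("192.168.12.128/26")
	used := map[netip.Addr]bool{}
	a := block.Addr()
	for i := 0; i < 6; i++ { // assume ordinals .128-.133 are already taken
		used[a] = true
		a = a.Next()
	}
	// "Attempting to assign 1 addresses from block": scan for the first gap.
	for c := block.Addr(); block.Contains(c); c = c.Next() {
		if !used[c] {
			fmt.Println("claim", c) // prints "claim 192.168.12.134"
			break
		}
	}
}

The real allocator then does the "Writing block in order to claim IPs" step, a compare-and-swap on the block document, which is why a claim can only succeed once even across nodes.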
Nov 8 00:40:27.915864 containerd[1626]: 2025-11-08 00:40:27.780 [INFO][4589] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.134/26] IPv6=[] ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" HandleID="k8s-pod-network.53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:27.917975 containerd[1626]: 2025-11-08 00:40:27.806 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-4ddc6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b360798f-4525-4a64-8263-1b2065da4cca", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-4ddc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3a70d64b6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:27.917975 containerd[1626]: 2025-11-08 00:40:27.806 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.134/32] ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-4ddc6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:27.917975 containerd[1626]: 2025-11-08 00:40:27.806 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3a70d64b6f ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-4ddc6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:27.917975 containerd[1626]: 2025-11-08 00:40:27.829 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-4ddc6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:27.917975 containerd[1626]: 2025-11-08 00:40:27.847 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-4ddc6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b360798f-4525-4a64-8263-1b2065da4cca", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9", Pod:"coredns-668d6bf9bc-4ddc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3a70d64b6f", MAC:"42:a6:1f:67:11:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:27.917975 containerd[1626]: 2025-11-08 00:40:27.890 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9" Namespace="kube-system" Pod="coredns-668d6bf9bc-4ddc6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:27.921112 containerd[1626]: time="2025-11-08T00:40:27.919583692Z" level=info msg="TearDown network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\" successfully" Nov 8 00:40:27.921112 containerd[1626]: time="2025-11-08T00:40:27.919623839Z" level=info msg="StopPodSandbox for \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\" returns successfully" Nov 8 00:40:27.921821 containerd[1626]: time="2025-11-08T00:40:27.921785058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frtm6,Uid:732940a2-6d95-4610-b476-89508bce10b7,Namespace:calico-system,Attempt:1,}" Nov 8 00:40:27.957575 containerd[1626]: time="2025-11-08T00:40:27.957340261Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-86f7fc8b8c-7f5zz,Uid:0ad6b56e-2fd0-4653-867f-174ff7a29321,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c\"" Nov 8 00:40:27.975367 containerd[1626]: time="2025-11-08T00:40:27.973208665Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:40:27.975367 containerd[1626]: time="2025-11-08T00:40:27.975022783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:40:27.976923 containerd[1626]: time="2025-11-08T00:40:27.975940668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:27.978809 containerd[1626]: time="2025-11-08T00:40:27.978659946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:28.007346 containerd[1626]: time="2025-11-08T00:40:28.006727244Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:28.010232 containerd[1626]: time="2025-11-08T00:40:28.009307795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:40:28.010232 containerd[1626]: time="2025-11-08T00:40:28.009541341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:40:28.012877 kubelet[2815]: E1108 00:40:28.010695 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:40:28.012877 kubelet[2815]: E1108 00:40:28.010784 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:40:28.012877 kubelet[2815]: E1108 00:40:28.011370 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cftv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d66d4898-wkptg_calico-system(ff4e3273-e49c-43ab-a17a-ef1a2a65c067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:28.013179 containerd[1626]: time="2025-11-08T00:40:28.011426981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:40:28.013455 kubelet[2815]: E1108 00:40:28.013286 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:40:28.090694 containerd[1626]: time="2025-11-08T00:40:28.090499619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:40:28.093088 containerd[1626]: time="2025-11-08T00:40:28.091649960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:40:28.093088 containerd[1626]: time="2025-11-08T00:40:28.091684089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:28.093088 containerd[1626]: time="2025-11-08T00:40:28.091877052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:28.122638 systemd[1]: run-containerd-runc-k8s.io-b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c-runc.UFh0VT.mount: Deactivated successfully. Nov 8 00:40:28.122895 systemd[1]: run-netns-cni\x2da9497d40\x2d21e8\x2df230\x2de664\x2d1c29312ff83d.mount: Deactivated successfully. Nov 8 00:40:28.159284 systemd-networkd[1261]: calib42f5c03eb9: Gained IPv6LL Nov 8 00:40:28.230205 containerd[1626]: time="2025-11-08T00:40:28.228097249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vq2vj,Uid:5d88aab4-7fae-4885-9d4e-0a85f6911a17,Namespace:kube-system,Attempt:1,} returns sandbox id \"d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb\"" Nov 8 00:40:28.251367 containerd[1626]: time="2025-11-08T00:40:28.250834709Z" level=info msg="CreateContainer within sandbox \"d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:40:28.306919 containerd[1626]: time="2025-11-08T00:40:28.306813176Z" level=info msg="CreateContainer within sandbox \"d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"636518b1f1d4ca5b96dc12b576adbae752e76e629c4bdec877bea11c7141e0a3\"" Nov 8 00:40:28.309847 containerd[1626]: time="2025-11-08T00:40:28.309513655Z" level=info msg="StartContainer for \"636518b1f1d4ca5b96dc12b576adbae752e76e629c4bdec877bea11c7141e0a3\"" Nov 8 00:40:28.310037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232082903.mount: Deactivated successfully. 
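The mount units in the cleanup entries above (run-netns-cni\x2da9497d40…, var-lib-containerd-tmpmounts-…) use systemd's unit-name escaping, in which a literal '-' inside a path component is written as '\x2d' while a plain '-' separates components. A small Go helper to decode such a unit name back into its path (illustrative; systemd-escape --unescape performs the same translation):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unitToPath turns a systemd mount unit name back into the path it guards:
// "\x2d" decodes to a literal '-', while a plain '-' separates components.
func unitToPath(unit string) string {
	s := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(s); i++ {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(n))
				i += 3 // skip the rest of the \xNN escape
				continue
			}
		}
		if s[i] == '-' {
			b.WriteByte('/')
		} else {
			b.WriteByte(s[i])
		}
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2da9497d40\x2d21e8\x2df230\x2de664\x2d1c29312ff83d.mount`
	fmt.Println(unitToPath(unit)) // /run/netns/cni-a9497d40-21e8-f230-e664-1c29312ff83d
}

Decoding recovers the same netns path the CNI teardown logged earlier (/var/run/netns is /run/netns on this system), confirming these two unit cleanups belong to that sandbox.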
Nov 8 00:40:28.359924 containerd[1626]: time="2025-11-08T00:40:28.358955314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4ddc6,Uid:b360798f-4525-4a64-8263-1b2065da4cca,Namespace:kube-system,Attempt:1,} returns sandbox id \"53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9\"" Nov 8 00:40:28.376680 containerd[1626]: time="2025-11-08T00:40:28.376472895Z" level=info msg="CreateContainer within sandbox \"53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:40:28.378360 containerd[1626]: time="2025-11-08T00:40:28.378258921Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:28.389484 containerd[1626]: time="2025-11-08T00:40:28.389416246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:40:28.389642 containerd[1626]: time="2025-11-08T00:40:28.389546636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:40:28.392296 kubelet[2815]: E1108 00:40:28.389997 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:40:28.392296 kubelet[2815]: E1108 00:40:28.391312 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:40:28.392450 containerd[1626]: time="2025-11-08T00:40:28.391715128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:40:28.392774 kubelet[2815]: E1108 00:40:28.392701 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57g2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f7fc8b8c-rqgv4_calico-apiserver(1c25bfb9-44c6-4360-955b-d1bd985cf551): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:28.395001 kubelet[2815]: E1108 00:40:28.394773 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:40:28.419156 containerd[1626]: time="2025-11-08T00:40:28.418996549Z" level=info msg="CreateContainer within sandbox \"53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3f3b77b5e9427b8e54330c1fe91d9a067cbfd17e278ef41f9718bdd205c722b\"" Nov 8 00:40:28.451569 containerd[1626]: time="2025-11-08T00:40:28.450874543Z" level=info msg="StartContainer for \"a3f3b77b5e9427b8e54330c1fe91d9a067cbfd17e278ef41f9718bdd205c722b\"" Nov 8 00:40:28.544844 systemd-networkd[1261]: cali90cec46aed6: Gained IPv6LL Nov 8 00:40:28.570109 kubelet[2815]: E1108 00:40:28.569787 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:40:28.598378 kubelet[2815]: E1108 00:40:28.596719 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:40:28.674797 systemd-networkd[1261]: vxlan.calico: Gained IPv6LL Nov 8 00:40:28.703217 containerd[1626]: time="2025-11-08T00:40:28.702813489Z" level=info msg="StartContainer for \"636518b1f1d4ca5b96dc12b576adbae752e76e629c4bdec877bea11c7141e0a3\" returns successfully" Nov 8 00:40:28.711702 systemd-networkd[1261]: cali2cfa9ba1e19: Link UP Nov 8 00:40:28.712001 systemd-networkd[1261]: cali2cfa9ba1e19: Gained carrier Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.242 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0 csi-node-driver- calico-system 732940a2-6d95-4610-b476-89508bce10b7 941 0 2025-11-08 00:39:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-77jcb.gb1.brightbox.com csi-node-driver-frtm6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2cfa9ba1e19 [] [] }} ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Namespace="calico-system" Pod="csi-node-driver-frtm6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.246 [INFO][4745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Namespace="calico-system" Pod="csi-node-driver-frtm6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.412 [INFO][4849] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" 
HandleID="k8s-pod-network.a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.414 [INFO][4849] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" HandleID="k8s-pod-network.a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f920), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-77jcb.gb1.brightbox.com", "pod":"csi-node-driver-frtm6", "timestamp":"2025-11-08 00:40:28.412535175 +0000 UTC"}, Hostname:"srv-77jcb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.415 [INFO][4849] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.415 [INFO][4849] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.415 [INFO][4849] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-77jcb.gb1.brightbox.com' Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.450 [INFO][4849] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.480 [INFO][4849] ipam/ipam.go 394: Looking up existing affinities for host host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.525 [INFO][4849] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.550 [INFO][4849] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.567 [INFO][4849] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.567 [INFO][4849] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.590 [INFO][4849] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.624 [INFO][4849] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.668 [INFO][4849] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.135/26] block=192.168.12.128/26 handle="k8s-pod-network.a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:28.762166 containerd[1626]: 
2025-11-08 00:40:28.669 [INFO][4849] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.135/26] handle="k8s-pod-network.a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.670 [INFO][4849] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:28.762166 containerd[1626]: 2025-11-08 00:40:28.671 [INFO][4849] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.135/26] IPv6=[] ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" HandleID="k8s-pod-network.a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:28.763120 containerd[1626]: 2025-11-08 00:40:28.698 [INFO][4745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Namespace="calico-system" Pod="csi-node-driver-frtm6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"732940a2-6d95-4610-b476-89508bce10b7", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-frtm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2cfa9ba1e19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:28.763120 containerd[1626]: 2025-11-08 00:40:28.699 [INFO][4745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.135/32] ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Namespace="calico-system" Pod="csi-node-driver-frtm6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:28.763120 containerd[1626]: 2025-11-08 00:40:28.699 [INFO][4745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cfa9ba1e19 ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Namespace="calico-system" Pod="csi-node-driver-frtm6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:28.763120 containerd[1626]: 2025-11-08 00:40:28.712 [INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Namespace="calico-system" Pod="csi-node-driver-frtm6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:28.763120 containerd[1626]: 2025-11-08 00:40:28.715 [INFO][4745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Namespace="calico-system" Pod="csi-node-driver-frtm6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"732940a2-6d95-4610-b476-89508bce10b7", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c", Pod:"csi-node-driver-frtm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2cfa9ba1e19", MAC:"b2:81:59:90:d0:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:28.763120 containerd[1626]: 2025-11-08 00:40:28.746 [INFO][4745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c" Namespace="calico-system" Pod="csi-node-driver-frtm6" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:28.788202 containerd[1626]: time="2025-11-08T00:40:28.788096335Z" level=info msg="StartContainer for \"a3f3b77b5e9427b8e54330c1fe91d9a067cbfd17e278ef41f9718bdd205c722b\" returns successfully" Nov 8 00:40:28.829865 containerd[1626]: time="2025-11-08T00:40:28.829814056Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:28.835378 containerd[1626]: time="2025-11-08T00:40:28.833024738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:40:28.835378 containerd[1626]: time="2025-11-08T00:40:28.833316172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:40:28.835566 kubelet[2815]: E1108 
00:40:28.833736 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:40:28.835566 kubelet[2815]: E1108 00:40:28.833813 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:40:28.835566 kubelet[2815]: E1108 00:40:28.833971 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7m49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f7fc8b8c-7f5zz_calico-apiserver(0ad6b56e-2fd0-4653-867f-174ff7a29321): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:28.837066 kubelet[2815]: E1108 00:40:28.836406 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:40:28.866056 containerd[1626]: time="2025-11-08T00:40:28.864325112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:40:28.866056 containerd[1626]: time="2025-11-08T00:40:28.864410069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:40:28.866056 containerd[1626]: time="2025-11-08T00:40:28.864429041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:28.866056 containerd[1626]: time="2025-11-08T00:40:28.864570087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:28.954887 containerd[1626]: time="2025-11-08T00:40:28.953564672Z" level=info msg="StopPodSandbox for \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\"" Nov 8 00:40:29.058258 containerd[1626]: time="2025-11-08T00:40:29.058204140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-frtm6,Uid:732940a2-6d95-4610-b476-89508bce10b7,Namespace:calico-system,Attempt:1,} returns sandbox id \"a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c\"" Nov 8 00:40:29.062330 containerd[1626]: time="2025-11-08T00:40:29.062302342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.100 [INFO][5009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.100 [INFO][5009] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" iface="eth0" netns="/var/run/netns/cni-0c0ab603-21e0-55b6-c8ae-0a4d85f4647e" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.102 [INFO][5009] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" iface="eth0" netns="/var/run/netns/cni-0c0ab603-21e0-55b6-c8ae-0a4d85f4647e" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.102 [INFO][5009] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" iface="eth0" netns="/var/run/netns/cni-0c0ab603-21e0-55b6-c8ae-0a4d85f4647e" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.102 [INFO][5009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.102 [INFO][5009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.146 [INFO][5024] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" HandleID="k8s-pod-network.8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.146 [INFO][5024] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.146 [INFO][5024] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.156 [WARNING][5024] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" HandleID="k8s-pod-network.8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.156 [INFO][5024] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" HandleID="k8s-pod-network.8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.158 [INFO][5024] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:29.164669 containerd[1626]: 2025-11-08 00:40:29.162 [INFO][5009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:29.165980 containerd[1626]: time="2025-11-08T00:40:29.164776694Z" level=info msg="TearDown network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\" successfully" Nov 8 00:40:29.165980 containerd[1626]: time="2025-11-08T00:40:29.164832726Z" level=info msg="StopPodSandbox for \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\" returns successfully" Nov 8 00:40:29.169827 containerd[1626]: time="2025-11-08T00:40:29.168961153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wvrwm,Uid:9a229dd5-8929-4dea-a351-ff8ac4498f1d,Namespace:calico-system,Attempt:1,}" Nov 8 00:40:29.171636 systemd[1]: run-netns-cni\x2d0c0ab603\x2d21e0\x2d55b6\x2dc8ae\x2d0a4d85f4647e.mount: Deactivated successfully. 
Nov 8 00:40:29.182673 systemd-networkd[1261]: calie3a70d64b6f: Gained IPv6LL Nov 8 00:40:29.341069 systemd-networkd[1261]: cali192204ccb69: Link UP Nov 8 00:40:29.342504 systemd-networkd[1261]: cali192204ccb69: Gained carrier Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.241 [INFO][5030] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0 goldmane-666569f655- calico-system 9a229dd5-8929-4dea-a351-ff8ac4498f1d 983 0 2025-11-08 00:39:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-77jcb.gb1.brightbox.com goldmane-666569f655-wvrwm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali192204ccb69 [] [] }} ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Namespace="calico-system" Pod="goldmane-666569f655-wvrwm" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.241 [INFO][5030] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Namespace="calico-system" Pod="goldmane-666569f655-wvrwm" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.284 [INFO][5042] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" HandleID="k8s-pod-network.fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.284 [INFO][5042] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" HandleID="k8s-pod-network.fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ff60), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-77jcb.gb1.brightbox.com", "pod":"goldmane-666569f655-wvrwm", "timestamp":"2025-11-08 00:40:29.284264174 +0000 UTC"}, Hostname:"srv-77jcb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.284 [INFO][5042] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.284 [INFO][5042] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
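Every assignment and release in these entries is bracketed by the same "Acquired/Released host-wide IPAM lock" pair, so the concurrent CNI invocations visible in this log (request IDs such as [4589], [4849], [5042]) serialize their updates to the shared block. A minimal sketch of the pattern, assuming an in-process mutex and a simple ordinal counter (the real lock is coordinated through the datastore, not process memory, and the addresses printed are illustrative):

package main

import (
	"fmt"
	"sync"
)

// hostIPAM serializes address assignment on one node, mirroring the
// "Acquired/Released host-wide IPAM lock" pairs bracketing each request.
type hostIPAM struct {
	mu   sync.Mutex
	next int // next free ordinal in the node's /26 (illustrative)
}

func (h *hostIPAM) autoAssign() string {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.12.%d", 128+h.next)
	h.next++
	return ip
}

func main() {
	h := &hostIPAM{next: 8} // ordinals .128-.135 already taken per earlier entries
	var wg sync.WaitGroup
	for _, req := range []string{"[5042] CmdAddK8s", "[4849] CmdAddK8s"} {
		wg.Add(1)
		go func(r string) {
			defer wg.Done()
			// Concurrent CNI ADDs race here; the lock keeps each claim unique.
			fmt.Println(r, "->", h.autoAssign())
		}(req)
	}
	wg.Wait()
}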
Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.284 [INFO][5042] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-77jcb.gb1.brightbox.com' Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.294 [INFO][5042] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.300 [INFO][5042] ipam/ipam.go 394: Looking up existing affinities for host host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.307 [INFO][5042] ipam/ipam.go 511: Trying affinity for 192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.311 [INFO][5042] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.315 [INFO][5042] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.128/26 host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.315 [INFO][5042] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.128/26 handle="k8s-pod-network.fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.317 [INFO][5042] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08 Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.323 [INFO][5042] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.128/26 handle="k8s-pod-network.fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.332 [INFO][5042] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.136/26] block=192.168.12.128/26 handle="k8s-pod-network.fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.332 [INFO][5042] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.136/26] handle="k8s-pod-network.fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" host="srv-77jcb.gb1.brightbox.com" Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.332 [INFO][5042] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
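The assignment walk in the block above is: look up the host's block affinities, try the affine block 192.168.12.128/26, load it, pick a free address (.136), then write the block back to claim the IP. A small Go sketch of the "pick a free ordinal in a /26" step, with allocation state as a 64-entry bitmap; the real claim is a compare-and-swap on the block's datastore revision, which this sketch omits:

package main

import (
	"fmt"
	"net/netip"
)

// claimFromBlock returns the first unallocated address in a /26 block,
// mirroring affinity -> load block -> assign seen in the log. The bitmap
// and its contents are illustrative assumptions.
func claimFromBlock(block netip.Prefix, used [64]bool) (netip.Addr, error) {
	addr := block.Addr()
	for i := 0; i < 64; i++ {
		if !used[i] {
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", block)
}

func main() {
	block := netip.MustParsePrefix("192.168.12.128/26")
	var used [64]bool
	for i := 0; i < 8; i++ { // .128 through .135 already taken
		used[i] = true
	}
	ip, err := claimFromBlock(block, used)
	fmt.Println(ip, err) // 192.168.12.136 <nil>
}

Writing the whole block back under the host-wide lock is what makes the claim atomic with respect to other pods scheduling onto the same node.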
Nov 8 00:40:29.376003 containerd[1626]: 2025-11-08 00:40:29.332 [INFO][5042] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.136/26] IPv6=[] ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" HandleID="k8s-pod-network.fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.381312 containerd[1626]: 2025-11-08 00:40:29.336 [INFO][5030] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Namespace="calico-system" Pod="goldmane-666569f655-wvrwm" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9a229dd5-8929-4dea-a351-ff8ac4498f1d", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-wvrwm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali192204ccb69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:29.381312 containerd[1626]: 2025-11-08 00:40:29.336 [INFO][5030] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.136/32] ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Namespace="calico-system" Pod="goldmane-666569f655-wvrwm" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.381312 containerd[1626]: 2025-11-08 00:40:29.336 [INFO][5030] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali192204ccb69 ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Namespace="calico-system" Pod="goldmane-666569f655-wvrwm" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.381312 containerd[1626]: 2025-11-08 00:40:29.343 [INFO][5030] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Namespace="calico-system" Pod="goldmane-666569f655-wvrwm" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.381312 containerd[1626]: 2025-11-08 00:40:29.345 [INFO][5030] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" 
Namespace="calico-system" Pod="goldmane-666569f655-wvrwm" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9a229dd5-8929-4dea-a351-ff8ac4498f1d", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08", Pod:"goldmane-666569f655-wvrwm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali192204ccb69", MAC:"12:f8:c4:63:03:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:29.381312 containerd[1626]: 2025-11-08 00:40:29.365 [INFO][5030] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08" Namespace="calico-system" Pod="goldmane-666569f655-wvrwm" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:29.393218 containerd[1626]: time="2025-11-08T00:40:29.393158065Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:29.397600 containerd[1626]: time="2025-11-08T00:40:29.397551097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:40:29.397710 containerd[1626]: time="2025-11-08T00:40:29.397663275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:40:29.397989 kubelet[2815]: E1108 00:40:29.397931 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:40:29.398277 kubelet[2815]: E1108 00:40:29.398007 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:40:29.398277 kubelet[2815]: E1108 
00:40:29.398195 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crnk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:29.401947 containerd[1626]: time="2025-11-08T00:40:29.401916019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:40:29.427539 containerd[1626]: time="2025-11-08T00:40:29.426650713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:40:29.427539 containerd[1626]: time="2025-11-08T00:40:29.426747318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:40:29.427539 containerd[1626]: time="2025-11-08T00:40:29.426771945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:29.427539 containerd[1626]: time="2025-11-08T00:40:29.426947562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:40:29.515727 containerd[1626]: time="2025-11-08T00:40:29.515673602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wvrwm,Uid:9a229dd5-8929-4dea-a351-ff8ac4498f1d,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08\"" Nov 8 00:40:29.567209 systemd-networkd[1261]: cali89dad23c969: Gained IPv6LL Nov 8 00:40:29.579408 kubelet[2815]: E1108 00:40:29.579002 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:40:29.579408 kubelet[2815]: E1108 00:40:29.579261 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:40:29.597515 kubelet[2815]: I1108 00:40:29.596652 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4ddc6" podStartSLOduration=48.596621925 podStartE2EDuration="48.596621925s" podCreationTimestamp="2025-11-08 00:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:40:29.587517006 +0000 UTC m=+52.861031128" watchObservedRunningTime="2025-11-08 00:40:29.596621925 +0000 UTC m=+52.870136036" Nov 8 00:40:29.631928 kubelet[2815]: I1108 00:40:29.630623 2815 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vq2vj" podStartSLOduration=48.630601914 podStartE2EDuration="48.630601914s" podCreationTimestamp="2025-11-08 00:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:40:29.629253709 +0000 UTC m=+52.902767825" watchObservedRunningTime="2025-11-08 00:40:29.630601914 +0000 UTC m=+52.904116014" Nov 8 00:40:29.739566 containerd[1626]: time="2025-11-08T00:40:29.739179075Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:29.741300 containerd[1626]: time="2025-11-08T00:40:29.741222588Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:40:29.744071 containerd[1626]: time="2025-11-08T00:40:29.741387904Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:40:29.744071 containerd[1626]: time="2025-11-08T00:40:29.742882706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:40:29.744270 kubelet[2815]: E1108 00:40:29.741730 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:40:29.744270 kubelet[2815]: E1108 00:40:29.741798 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:40:29.744270 kubelet[2815]: E1108 00:40:29.742079 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crnk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
logger="UnhandledError" Nov 8 00:40:29.747522 kubelet[2815]: E1108 00:40:29.745621 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:30.078722 containerd[1626]: time="2025-11-08T00:40:30.078662120Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:30.081271 containerd[1626]: time="2025-11-08T00:40:30.080361718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:40:30.081651 containerd[1626]: time="2025-11-08T00:40:30.081224696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:40:30.082807 kubelet[2815]: E1108 00:40:30.081900 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:40:30.082807 kubelet[2815]: E1108 00:40:30.081973 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:40:30.082807 kubelet[2815]: E1108 00:40:30.082291 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvnpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wvrwm_calico-system(9a229dd5-8929-4dea-a351-ff8ac4498f1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:30.083855 kubelet[2815]: E1108 00:40:30.083727 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:40:30.462507 systemd-networkd[1261]: 
cali2cfa9ba1e19: Gained IPv6LL Nov 8 00:40:30.583093 kubelet[2815]: E1108 00:40:30.581712 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:40:30.584450 kubelet[2815]: E1108 00:40:30.583402 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:30.654625 systemd-networkd[1261]: cali192204ccb69: Gained IPv6LL Nov 8 00:40:36.925560 containerd[1626]: time="2025-11-08T00:40:36.925505150Z" level=info msg="StopPodSandbox for \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\"" Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:36.990 [WARNING][5122] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b360798f-4525-4a64-8263-1b2065da4cca", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9", Pod:"coredns-668d6bf9bc-4ddc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3a70d64b6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:36.991 [INFO][5122] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:36.991 [INFO][5122] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" iface="eth0" netns="" Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:36.991 [INFO][5122] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:36.991 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:37.046 [INFO][5131] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" HandleID="k8s-pod-network.f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:37.047 [INFO][5131] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:37.047 [INFO][5131] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:37.058 [WARNING][5131] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" HandleID="k8s-pod-network.f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:37.058 [INFO][5131] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" HandleID="k8s-pod-network.f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:37.060 [INFO][5131] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:37.065086 containerd[1626]: 2025-11-08 00:40:37.062 [INFO][5122] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:37.066048 containerd[1626]: time="2025-11-08T00:40:37.065101690Z" level=info msg="TearDown network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\" successfully" Nov 8 00:40:37.066048 containerd[1626]: time="2025-11-08T00:40:37.065183804Z" level=info msg="StopPodSandbox for \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\" returns successfully" Nov 8 00:40:37.067336 containerd[1626]: time="2025-11-08T00:40:37.067266546Z" level=info msg="RemovePodSandbox for \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\"" Nov 8 00:40:37.067426 containerd[1626]: time="2025-11-08T00:40:37.067362705Z" level=info msg="Forcibly stopping sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\"" Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.114 [WARNING][5145] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b360798f-4525-4a64-8263-1b2065da4cca", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"53e2a34d61c8c6dbeb01cca476d88cc04f0443fe6376498e34920afcad7e46f9", Pod:"coredns-668d6bf9bc-4ddc6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie3a70d64b6f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.114 [INFO][5145] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.114 [INFO][5145] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" iface="eth0" netns="" Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.114 [INFO][5145] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.114 [INFO][5145] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.145 [INFO][5153] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" HandleID="k8s-pod-network.f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.146 [INFO][5153] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.146 [INFO][5153] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.154 [WARNING][5153] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" HandleID="k8s-pod-network.f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.155 [INFO][5153] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" HandleID="k8s-pod-network.f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4ddc6-eth0" Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.156 [INFO][5153] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:37.160497 containerd[1626]: 2025-11-08 00:40:37.158 [INFO][5145] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb" Nov 8 00:40:37.161673 containerd[1626]: time="2025-11-08T00:40:37.160572088Z" level=info msg="TearDown network for sandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\" successfully" Nov 8 00:40:37.181001 containerd[1626]: time="2025-11-08T00:40:37.180660866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:40:37.181001 containerd[1626]: time="2025-11-08T00:40:37.180757238Z" level=info msg="RemovePodSandbox \"f3243c471b34005dd36d0b0215fc28f6c009810b79d0ec3e61038b843fb855fb\" returns successfully" Nov 8 00:40:37.182654 containerd[1626]: time="2025-11-08T00:40:37.182554048Z" level=info msg="StopPodSandbox for \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\"" Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.229 [WARNING][5167] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0", GenerateName:"calico-kube-controllers-7d465f66d6-", Namespace:"calico-system", SelfLink:"", UID:"321e585a-41b7-4e8f-995a-c57a69c6e824", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d465f66d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba", Pod:"calico-kube-controllers-7d465f66d6-5v9hs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie456abe0a30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.230 [INFO][5167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.230 [INFO][5167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" iface="eth0" netns="" Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.230 [INFO][5167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.230 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.262 [INFO][5174] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" HandleID="k8s-pod-network.1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.263 [INFO][5174] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.263 [INFO][5174] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.272 [WARNING][5174] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" HandleID="k8s-pod-network.1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.272 [INFO][5174] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" HandleID="k8s-pod-network.1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.273 [INFO][5174] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:37.277484 containerd[1626]: 2025-11-08 00:40:37.275 [INFO][5167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:37.279703 containerd[1626]: time="2025-11-08T00:40:37.277526116Z" level=info msg="TearDown network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\" successfully" Nov 8 00:40:37.279703 containerd[1626]: time="2025-11-08T00:40:37.277563204Z" level=info msg="StopPodSandbox for \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\" returns successfully" Nov 8 00:40:37.279703 containerd[1626]: time="2025-11-08T00:40:37.278595988Z" level=info msg="RemovePodSandbox for \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\"" Nov 8 00:40:37.279703 containerd[1626]: time="2025-11-08T00:40:37.278636176Z" level=info msg="Forcibly stopping sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\"" Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.327 [WARNING][5188] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0", GenerateName:"calico-kube-controllers-7d465f66d6-", Namespace:"calico-system", SelfLink:"", UID:"321e585a-41b7-4e8f-995a-c57a69c6e824", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d465f66d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"c10ae75ec670cd1c2eb3e43aba5050979cd2b6047927cb1e53b8dd39fde30fba", Pod:"calico-kube-controllers-7d465f66d6-5v9hs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie456abe0a30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.328 [INFO][5188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.328 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" iface="eth0" netns="" Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.328 [INFO][5188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.328 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.372 [INFO][5195] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" HandleID="k8s-pod-network.1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.373 [INFO][5195] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.373 [INFO][5195] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.381 [WARNING][5195] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" HandleID="k8s-pod-network.1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.381 [INFO][5195] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" HandleID="k8s-pod-network.1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--kube--controllers--7d465f66d6--5v9hs-eth0" Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.383 [INFO][5195] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:37.387475 containerd[1626]: 2025-11-08 00:40:37.385 [INFO][5188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3" Nov 8 00:40:37.388505 containerd[1626]: time="2025-11-08T00:40:37.387531373Z" level=info msg="TearDown network for sandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\" successfully" Nov 8 00:40:37.391007 containerd[1626]: time="2025-11-08T00:40:37.390954822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:40:37.391539 containerd[1626]: time="2025-11-08T00:40:37.391022933Z" level=info msg="RemovePodSandbox \"1b4ab01ec52ccf41b50c513a539a9551615b0ac5f8c7cbb667fe8fe3143694c3\" returns successfully" Nov 8 00:40:37.391662 containerd[1626]: time="2025-11-08T00:40:37.391629432Z" level=info msg="StopPodSandbox for \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\"" Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.439 [WARNING][5210] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0", GenerateName:"calico-apiserver-86f7fc8b8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c25bfb9-44c6-4360-955b-d1bd985cf551", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f7fc8b8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878", Pod:"calico-apiserver-86f7fc8b8c-rqgv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42f5c03eb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.439 [INFO][5210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.439 [INFO][5210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" iface="eth0" netns="" Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.439 [INFO][5210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.439 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.473 [INFO][5218] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" HandleID="k8s-pod-network.b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.473 [INFO][5218] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.473 [INFO][5218] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.481 [WARNING][5218] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" HandleID="k8s-pod-network.b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.481 [INFO][5218] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" HandleID="k8s-pod-network.b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.483 [INFO][5218] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:37.487734 containerd[1626]: 2025-11-08 00:40:37.485 [INFO][5210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:37.489202 containerd[1626]: time="2025-11-08T00:40:37.487695465Z" level=info msg="TearDown network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\" successfully" Nov 8 00:40:37.489906 containerd[1626]: time="2025-11-08T00:40:37.489205544Z" level=info msg="StopPodSandbox for \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\" returns successfully" Nov 8 00:40:37.489906 containerd[1626]: time="2025-11-08T00:40:37.489819682Z" level=info msg="RemovePodSandbox for \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\"" Nov 8 00:40:37.489906 containerd[1626]: time="2025-11-08T00:40:37.489859507Z" level=info msg="Forcibly stopping sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\"" Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.538 [WARNING][5232] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0", GenerateName:"calico-apiserver-86f7fc8b8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c25bfb9-44c6-4360-955b-d1bd985cf551", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f7fc8b8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"1157eda3916ae37ffd778ffd632ba9eba11e224ee7e05546f2466d737ec3b878", Pod:"calico-apiserver-86f7fc8b8c-rqgv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42f5c03eb9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.538 [INFO][5232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.538 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" iface="eth0" netns="" Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.538 [INFO][5232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.538 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.569 [INFO][5239] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" HandleID="k8s-pod-network.b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.569 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.569 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.579 [WARNING][5239] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" HandleID="k8s-pod-network.b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.579 [INFO][5239] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" HandleID="k8s-pod-network.b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--rqgv4-eth0" Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.582 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:37.586375 containerd[1626]: 2025-11-08 00:40:37.584 [INFO][5232] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9" Nov 8 00:40:37.588659 containerd[1626]: time="2025-11-08T00:40:37.586342888Z" level=info msg="TearDown network for sandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\" successfully" Nov 8 00:40:37.592847 containerd[1626]: time="2025-11-08T00:40:37.592773908Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:40:37.592847 containerd[1626]: time="2025-11-08T00:40:37.592841962Z" level=info msg="RemovePodSandbox \"b411aa307dfbc6687b85e2f2aaccdf898fb95a9d1047ead1c506d5df2e72e3e9\" returns successfully" Nov 8 00:40:37.594254 containerd[1626]: time="2025-11-08T00:40:37.593789729Z" level=info msg="StopPodSandbox for \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\"" Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.646 [WARNING][5254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0", GenerateName:"calico-apiserver-86f7fc8b8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad6b56e-2fd0-4653-867f-174ff7a29321", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f7fc8b8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c", Pod:"calico-apiserver-86f7fc8b8c-7f5zz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90cec46aed6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.647 [INFO][5254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.647 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" iface="eth0" netns="" Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.647 [INFO][5254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.647 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.675 [INFO][5263] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" HandleID="k8s-pod-network.5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.675 [INFO][5263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.675 [INFO][5263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.684 [WARNING][5263] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" HandleID="k8s-pod-network.5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.684 [INFO][5263] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" HandleID="k8s-pod-network.5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.686 [INFO][5263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:37.690403 containerd[1626]: 2025-11-08 00:40:37.688 [INFO][5254] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:37.690403 containerd[1626]: time="2025-11-08T00:40:37.689999878Z" level=info msg="TearDown network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\" successfully" Nov 8 00:40:37.690403 containerd[1626]: time="2025-11-08T00:40:37.690035629Z" level=info msg="StopPodSandbox for \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\" returns successfully" Nov 8 00:40:37.691828 containerd[1626]: time="2025-11-08T00:40:37.691707806Z" level=info msg="RemovePodSandbox for \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\"" Nov 8 00:40:37.691828 containerd[1626]: time="2025-11-08T00:40:37.691755599Z" level=info msg="Forcibly stopping sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\"" Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.742 [WARNING][5278] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0", GenerateName:"calico-apiserver-86f7fc8b8c-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ad6b56e-2fd0-4653-867f-174ff7a29321", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f7fc8b8c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"b9add0cd3e7b1091e09e05225898fa6612d0e2df92d4545de59e5b2b8416536c", Pod:"calico-apiserver-86f7fc8b8c-7f5zz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90cec46aed6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.742 [INFO][5278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.742 [INFO][5278] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" iface="eth0" netns="" Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.742 [INFO][5278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.742 [INFO][5278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.774 [INFO][5285] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" HandleID="k8s-pod-network.5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.775 [INFO][5285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.775 [INFO][5285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.784 [WARNING][5285] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" HandleID="k8s-pod-network.5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.784 [INFO][5285] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" HandleID="k8s-pod-network.5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Workload="srv--77jcb.gb1.brightbox.com-k8s-calico--apiserver--86f7fc8b8c--7f5zz-eth0" Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.786 [INFO][5285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:37.790988 containerd[1626]: 2025-11-08 00:40:37.788 [INFO][5278] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9" Nov 8 00:40:37.792551 containerd[1626]: time="2025-11-08T00:40:37.792218512Z" level=info msg="TearDown network for sandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\" successfully" Nov 8 00:40:37.796434 containerd[1626]: time="2025-11-08T00:40:37.796399918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:40:37.796594 containerd[1626]: time="2025-11-08T00:40:37.796557213Z" level=info msg="RemovePodSandbox \"5717dfeddb3434b4e081b1479f46b7a3cfc9aed8e64645ae038fcae5f887f9e9\" returns successfully" Nov 8 00:40:37.797804 containerd[1626]: time="2025-11-08T00:40:37.797389636Z" level=info msg="StopPodSandbox for \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\"" Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.848 [WARNING][5299] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d88aab4-7fae-4885-9d4e-0a85f6911a17", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb", Pod:"coredns-668d6bf9bc-vq2vj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dad23c969", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.849 [INFO][5299] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.849 [INFO][5299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" iface="eth0" netns="" Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.849 [INFO][5299] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.849 [INFO][5299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.878 [INFO][5306] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" HandleID="k8s-pod-network.121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.879 [INFO][5306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.879 [INFO][5306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.887 [WARNING][5306] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" HandleID="k8s-pod-network.121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.888 [INFO][5306] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" HandleID="k8s-pod-network.121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.889 [INFO][5306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:37.894004 containerd[1626]: 2025-11-08 00:40:37.891 [INFO][5299] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:37.895085 containerd[1626]: time="2025-11-08T00:40:37.894867373Z" level=info msg="TearDown network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\" successfully" Nov 8 00:40:37.895085 containerd[1626]: time="2025-11-08T00:40:37.894921432Z" level=info msg="StopPodSandbox for \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\" returns successfully" Nov 8 00:40:37.895663 containerd[1626]: time="2025-11-08T00:40:37.895584546Z" level=info msg="RemovePodSandbox for \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\"" Nov 8 00:40:37.895749 containerd[1626]: time="2025-11-08T00:40:37.895676569Z" level=info msg="Forcibly stopping sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\"" Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.946 [WARNING][5320] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d88aab4-7fae-4885-9d4e-0a85f6911a17", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"d96a33de8e3bed11c4fac5645fec76262b0312bcf8e3633659b22a3014342dcb", Pod:"coredns-668d6bf9bc-vq2vj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89dad23c969", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.947 [INFO][5320] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.947 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" iface="eth0" netns="" Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.947 [INFO][5320] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.947 [INFO][5320] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.979 [INFO][5327] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" HandleID="k8s-pod-network.121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.979 [INFO][5327] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.979 [INFO][5327] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.989 [WARNING][5327] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" HandleID="k8s-pod-network.121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.989 [INFO][5327] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" HandleID="k8s-pod-network.121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Workload="srv--77jcb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--vq2vj-eth0" Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.991 [INFO][5327] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:38.001506 containerd[1626]: 2025-11-08 00:40:37.995 [INFO][5320] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b" Nov 8 00:40:38.001506 containerd[1626]: time="2025-11-08T00:40:38.001390333Z" level=info msg="TearDown network for sandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\" successfully" Nov 8 00:40:38.011013 containerd[1626]: time="2025-11-08T00:40:38.009851780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:40:38.011013 containerd[1626]: time="2025-11-08T00:40:38.010036596Z" level=info msg="RemovePodSandbox \"121763391f36ccc747ab5a126ac9880e960d713eb56782488e8c15694a17d47b\" returns successfully" Nov 8 00:40:38.011013 containerd[1626]: time="2025-11-08T00:40:38.010599715Z" level=info msg="StopPodSandbox for \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\"" Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.073 [WARNING][5341] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9a229dd5-8929-4dea-a351-ff8ac4498f1d", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08", Pod:"goldmane-666569f655-wvrwm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali192204ccb69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.073 [INFO][5341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.073 [INFO][5341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" iface="eth0" netns="" Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.073 [INFO][5341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.073 [INFO][5341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.110 [INFO][5348] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" HandleID="k8s-pod-network.8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.111 [INFO][5348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.111 [INFO][5348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.120 [WARNING][5348] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" HandleID="k8s-pod-network.8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.120 [INFO][5348] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" HandleID="k8s-pod-network.8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.122 [INFO][5348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:38.126671 containerd[1626]: 2025-11-08 00:40:38.124 [INFO][5341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:38.128428 containerd[1626]: time="2025-11-08T00:40:38.126733381Z" level=info msg="TearDown network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\" successfully" Nov 8 00:40:38.128428 containerd[1626]: time="2025-11-08T00:40:38.126779637Z" level=info msg="StopPodSandbox for \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\" returns successfully" Nov 8 00:40:38.128428 containerd[1626]: time="2025-11-08T00:40:38.127420700Z" level=info msg="RemovePodSandbox for \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\"" Nov 8 00:40:38.128428 containerd[1626]: time="2025-11-08T00:40:38.127461789Z" level=info msg="Forcibly stopping sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\"" Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.177 [WARNING][5362] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9a229dd5-8929-4dea-a351-ff8ac4498f1d", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"fb01702b456c09acde2c5f013691c1a647e51a3acaa8fd26803e258a19405a08", Pod:"goldmane-666569f655-wvrwm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali192204ccb69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.177 [INFO][5362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.177 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" iface="eth0" netns="" Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.177 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.178 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.208 [INFO][5369] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" HandleID="k8s-pod-network.8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.208 [INFO][5369] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.209 [INFO][5369] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.218 [WARNING][5369] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" HandleID="k8s-pod-network.8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.218 [INFO][5369] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" HandleID="k8s-pod-network.8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Workload="srv--77jcb.gb1.brightbox.com-k8s-goldmane--666569f655--wvrwm-eth0" Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.220 [INFO][5369] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:38.224379 containerd[1626]: 2025-11-08 00:40:38.222 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640" Nov 8 00:40:38.225694 containerd[1626]: time="2025-11-08T00:40:38.224466072Z" level=info msg="TearDown network for sandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\" successfully" Nov 8 00:40:38.228093 containerd[1626]: time="2025-11-08T00:40:38.228049168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:40:38.228212 containerd[1626]: time="2025-11-08T00:40:38.228111758Z" level=info msg="RemovePodSandbox \"8b77c7d29c2331a56910e50cf83758e907d8fff4e2d143d5a0bf9f50160e7640\" returns successfully" Nov 8 00:40:38.229108 containerd[1626]: time="2025-11-08T00:40:38.228735588Z" level=info msg="StopPodSandbox for \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\"" Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.294 [WARNING][5383] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"732940a2-6d95-4610-b476-89508bce10b7", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c", Pod:"csi-node-driver-frtm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2cfa9ba1e19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.295 [INFO][5383] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.295 [INFO][5383] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" iface="eth0" netns="" Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.295 [INFO][5383] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.295 [INFO][5383] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.326 [INFO][5397] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" HandleID="k8s-pod-network.e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.327 [INFO][5397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.327 [INFO][5397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.336 [WARNING][5397] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" HandleID="k8s-pod-network.e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.336 [INFO][5397] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" HandleID="k8s-pod-network.e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.338 [INFO][5397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:38.341603 containerd[1626]: 2025-11-08 00:40:38.339 [INFO][5383] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:38.342734 containerd[1626]: time="2025-11-08T00:40:38.342528095Z" level=info msg="TearDown network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\" successfully" Nov 8 00:40:38.342734 containerd[1626]: time="2025-11-08T00:40:38.342568374Z" level=info msg="StopPodSandbox for \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\" returns successfully" Nov 8 00:40:38.343493 containerd[1626]: time="2025-11-08T00:40:38.343465625Z" level=info msg="RemovePodSandbox for \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\"" Nov 8 00:40:38.343565 containerd[1626]: time="2025-11-08T00:40:38.343503535Z" level=info msg="Forcibly stopping sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\"" Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.389 [WARNING][5411] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"732940a2-6d95-4610-b476-89508bce10b7", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 39, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-77jcb.gb1.brightbox.com", ContainerID:"a6d6d90cb5500e2575f6d6f4f4e0df1e68d37e27a3e2d76f0e354d6326f2238c", Pod:"csi-node-driver-frtm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2cfa9ba1e19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.389 [INFO][5411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.389 [INFO][5411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" iface="eth0" netns="" Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.389 [INFO][5411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.389 [INFO][5411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.421 [INFO][5418] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" HandleID="k8s-pod-network.e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.422 [INFO][5418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.422 [INFO][5418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.431 [WARNING][5418] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" HandleID="k8s-pod-network.e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.431 [INFO][5418] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" HandleID="k8s-pod-network.e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Workload="srv--77jcb.gb1.brightbox.com-k8s-csi--node--driver--frtm6-eth0" Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.432 [INFO][5418] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:38.436904 containerd[1626]: 2025-11-08 00:40:38.434 [INFO][5411] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61" Nov 8 00:40:38.436904 containerd[1626]: time="2025-11-08T00:40:38.436869396Z" level=info msg="TearDown network for sandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\" successfully" Nov 8 00:40:38.441949 containerd[1626]: time="2025-11-08T00:40:38.441888388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:40:38.442036 containerd[1626]: time="2025-11-08T00:40:38.441962401Z" level=info msg="RemovePodSandbox \"e892a67b0a97b128bde877ea0b3238813569fad41a3974f22feaf050cd535f61\" returns successfully" Nov 8 00:40:38.443267 containerd[1626]: time="2025-11-08T00:40:38.442912700Z" level=info msg="StopPodSandbox for \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\"" Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.491 [WARNING][5432] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.492 [INFO][5432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.492 [INFO][5432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" iface="eth0" netns="" Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.492 [INFO][5432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.492 [INFO][5432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.521 [INFO][5439] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" HandleID="k8s-pod-network.5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.521 [INFO][5439] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.521 [INFO][5439] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.530 [WARNING][5439] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" HandleID="k8s-pod-network.5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.531 [INFO][5439] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" HandleID="k8s-pod-network.5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.532 [INFO][5439] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:38.536609 containerd[1626]: 2025-11-08 00:40:38.534 [INFO][5432] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:38.538143 containerd[1626]: time="2025-11-08T00:40:38.536662816Z" level=info msg="TearDown network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\" successfully" Nov 8 00:40:38.538143 containerd[1626]: time="2025-11-08T00:40:38.536698659Z" level=info msg="StopPodSandbox for \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\" returns successfully" Nov 8 00:40:38.538143 containerd[1626]: time="2025-11-08T00:40:38.537619226Z" level=info msg="RemovePodSandbox for \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\"" Nov 8 00:40:38.538143 containerd[1626]: time="2025-11-08T00:40:38.537683124Z" level=info msg="Forcibly stopping sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\"" Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.588 [WARNING][5453] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" WorkloadEndpoint="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.588 [INFO][5453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.588 [INFO][5453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" iface="eth0" netns="" Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.588 [INFO][5453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.588 [INFO][5453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.626 [INFO][5460] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" HandleID="k8s-pod-network.5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.627 [INFO][5460] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.627 [INFO][5460] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.636 [WARNING][5460] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" HandleID="k8s-pod-network.5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.636 [INFO][5460] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" HandleID="k8s-pod-network.5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Workload="srv--77jcb.gb1.brightbox.com-k8s-whisker--7d55b6f9f6--5sdgm-eth0" Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.639 [INFO][5460] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:40:38.643149 containerd[1626]: 2025-11-08 00:40:38.640 [INFO][5453] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253" Nov 8 00:40:38.643846 containerd[1626]: time="2025-11-08T00:40:38.643233690Z" level=info msg="TearDown network for sandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\" successfully" Nov 8 00:40:38.647502 containerd[1626]: time="2025-11-08T00:40:38.647458407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:40:38.647502 containerd[1626]: time="2025-11-08T00:40:38.647532567Z" level=info msg="RemovePodSandbox \"5012f3e6f5ad58c82d213d7bf659d72a93292f57619476c261ce3afe082e1253\" returns successfully" Nov 8 00:40:38.928964 containerd[1626]: time="2025-11-08T00:40:38.928885881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:40:39.248179 containerd[1626]: time="2025-11-08T00:40:39.247763566Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:39.249384 containerd[1626]: time="2025-11-08T00:40:39.249212868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:40:39.249384 containerd[1626]: time="2025-11-08T00:40:39.249242514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:40:39.249722 kubelet[2815]: E1108 00:40:39.249654 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:40:39.250231 kubelet[2815]: E1108 00:40:39.249733 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 
00:40:39.250231 kubelet[2815]: E1108 00:40:39.249919 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dm9b2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d465f66d6-5v9hs_calico-system(321e585a-41b7-4e8f-995a-c57a69c6e824): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:39.251798 kubelet[2815]: E1108 00:40:39.251660 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" 
podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:40:39.926665 containerd[1626]: time="2025-11-08T00:40:39.926611230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:40:40.258394 containerd[1626]: time="2025-11-08T00:40:40.258201357Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:40.263354 containerd[1626]: time="2025-11-08T00:40:40.263148549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:40:40.263354 containerd[1626]: time="2025-11-08T00:40:40.263159598Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:40:40.265194 kubelet[2815]: E1108 00:40:40.263667 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:40:40.265194 kubelet[2815]: E1108 00:40:40.263746 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:40:40.265194 kubelet[2815]: E1108 00:40:40.264091 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7m49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f7fc8b8c-7f5zz_calico-apiserver(0ad6b56e-2fd0-4653-867f-174ff7a29321): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:40.266204 containerd[1626]: time="2025-11-08T00:40:40.264843656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:40:40.268193 kubelet[2815]: E1108 00:40:40.266432 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:40:40.576723 containerd[1626]: time="2025-11-08T00:40:40.576487077Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:40.577816 containerd[1626]: time="2025-11-08T00:40:40.577635397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:40:40.577816 containerd[1626]: time="2025-11-08T00:40:40.577753507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:40:40.578222 kubelet[2815]: E1108 00:40:40.578118 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:40:40.578348 kubelet[2815]: E1108 00:40:40.578241 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:40:40.578492 kubelet[2815]: E1108 00:40:40.578426 2815 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:901a64b030a14723b934dd11dbc62d64,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cftv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d66d4898-wkptg_calico-system(ff4e3273-e49c-43ab-a17a-ef1a2a65c067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:40.581609 containerd[1626]: time="2025-11-08T00:40:40.581290377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:40:40.893273 containerd[1626]: time="2025-11-08T00:40:40.893077307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:40.894466 containerd[1626]: time="2025-11-08T00:40:40.894415548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:40:40.894890 containerd[1626]: time="2025-11-08T00:40:40.894441997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:40:40.895387 kubelet[2815]: E1108 00:40:40.894772 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:40:40.895387 kubelet[2815]: E1108 00:40:40.894849 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:40:40.895387 kubelet[2815]: E1108 00:40:40.894997 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cftv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d66d4898-wkptg_calico-system(ff4e3273-e49c-43ab-a17a-ef1a2a65c067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:40.896726 kubelet[2815]: E1108 00:40:40.896636 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:40:42.926914 containerd[1626]: time="2025-11-08T00:40:42.926651725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:40:43.242115 containerd[1626]: 
time="2025-11-08T00:40:43.241896461Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:43.243116 containerd[1626]: time="2025-11-08T00:40:43.242986575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:40:43.243116 containerd[1626]: time="2025-11-08T00:40:43.243043294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:40:43.243372 kubelet[2815]: E1108 00:40:43.243286 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:40:43.245473 kubelet[2815]: E1108 00:40:43.243385 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:40:43.245541 containerd[1626]: time="2025-11-08T00:40:43.243799242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:40:43.246872 kubelet[2815]: E1108 00:40:43.246264 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57g2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f7fc8b8c-rqgv4_calico-apiserver(1c25bfb9-44c6-4360-955b-d1bd985cf551): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:43.249009 kubelet[2815]: E1108 00:40:43.247945 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:40:43.561882 containerd[1626]: time="2025-11-08T00:40:43.561818154Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:43.567711 containerd[1626]: time="2025-11-08T00:40:43.567542912Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:40:43.567711 containerd[1626]: time="2025-11-08T00:40:43.567602255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:40:43.568048 kubelet[2815]: E1108 00:40:43.567945 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:40:43.568199 kubelet[2815]: E1108 00:40:43.568067 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:40:43.568392 kubelet[2815]: E1108 00:40:43.568304 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crnk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:43.570879 containerd[1626]: time="2025-11-08T00:40:43.570824920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:40:43.880811 containerd[1626]: time="2025-11-08T00:40:43.880600139Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:43.882379 containerd[1626]: time="2025-11-08T00:40:43.882237369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:40:43.882379 containerd[1626]: time="2025-11-08T00:40:43.882305403Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:40:43.882613 kubelet[2815]: E1108 00:40:43.882529 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:40:43.882613 kubelet[2815]: E1108 00:40:43.882597 2815 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:40:43.883268 kubelet[2815]: E1108 00:40:43.882778 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crnk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:43.884757 kubelet[2815]: E1108 00:40:43.884716 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:40:43.926456 containerd[1626]: time="2025-11-08T00:40:43.926093774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:40:44.245052 containerd[1626]: time="2025-11-08T00:40:44.244822903Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:40:44.246325 containerd[1626]: time="2025-11-08T00:40:44.246257564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:40:44.246427 containerd[1626]: time="2025-11-08T00:40:44.246382043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:40:44.246679 kubelet[2815]: E1108 00:40:44.246620 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:40:44.247119 kubelet[2815]: E1108 00:40:44.246695 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:40:44.247119 kubelet[2815]: E1108 00:40:44.246885 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvnpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wvrwm_calico-system(9a229dd5-8929-4dea-a351-ff8ac4498f1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:40:44.253302 kubelet[2815]: E1108 00:40:44.252702 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:40:49.926182 kubelet[2815]: E1108 00:40:49.926005 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:40:51.926110 kubelet[2815]: E1108 00:40:51.925322 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:40:53.926343 kubelet[2815]: E1108 00:40:53.926232 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:40:56.932478 kubelet[2815]: E1108 00:40:56.931985 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:40:57.928504 kubelet[2815]: E1108 00:40:57.927877 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:40:57.930641 kubelet[2815]: E1108 00:40:57.930254 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:41:02.934218 containerd[1626]: time="2025-11-08T00:41:02.933478924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:41:03.281466 containerd[1626]: time="2025-11-08T00:41:03.281225942Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:03.284522 containerd[1626]: time="2025-11-08T00:41:03.284459726Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:41:03.284638 containerd[1626]: time="2025-11-08T00:41:03.284582131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:41:03.285430 kubelet[2815]: E1108 00:41:03.285005 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:41:03.285430 kubelet[2815]: E1108 00:41:03.285111 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:41:03.286510 kubelet[2815]: E1108 00:41:03.285473 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:901a64b030a14723b934dd11dbc62d64,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cftv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d66d4898-wkptg_calico-system(ff4e3273-e49c-43ab-a17a-ef1a2a65c067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:03.288388 containerd[1626]: time="2025-11-08T00:41:03.288306141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:41:03.617612 containerd[1626]: time="2025-11-08T00:41:03.614387200Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 
00:41:03.617612 containerd[1626]: time="2025-11-08T00:41:03.617522961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:41:03.617938 containerd[1626]: time="2025-11-08T00:41:03.617634872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:41:03.619109 kubelet[2815]: E1108 00:41:03.618045 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:41:03.619109 kubelet[2815]: E1108 00:41:03.618142 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:41:03.619109 kubelet[2815]: E1108 00:41:03.618514 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dm9b2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d465f66d6-5v9hs_calico-system(321e585a-41b7-4e8f-995a-c57a69c6e824): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:03.620038 containerd[1626]: time="2025-11-08T00:41:03.620000726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:41:03.624469 kubelet[2815]: E1108 00:41:03.620477 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:41:03.994268 containerd[1626]: time="2025-11-08T00:41:03.992123259Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:03.996702 containerd[1626]: time="2025-11-08T00:41:03.996248498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:41:03.996702 containerd[1626]: time="2025-11-08T00:41:03.996624929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:41:03.997004 kubelet[2815]: E1108 00:41:03.996953 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:41:03.997102 kubelet[2815]: E1108 00:41:03.997021 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:41:03.997255 kubelet[2815]: E1108 00:41:03.997193 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cftv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d66d4898-wkptg_calico-system(ff4e3273-e49c-43ab-a17a-ef1a2a65c067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:03.998674 kubelet[2815]: E1108 00:41:03.998477 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:41:04.930291 containerd[1626]: time="2025-11-08T00:41:04.928405768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:41:05.251630 containerd[1626]: time="2025-11-08T00:41:05.251395806Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 
00:41:05.255799 containerd[1626]: time="2025-11-08T00:41:05.254465094Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:41:05.255799 containerd[1626]: time="2025-11-08T00:41:05.254571009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:41:05.256281 kubelet[2815]: E1108 00:41:05.254835 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:41:05.256281 kubelet[2815]: E1108 00:41:05.254960 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:41:05.256281 kubelet[2815]: E1108 00:41:05.255183 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7m49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-86f7fc8b8c-7f5zz_calico-apiserver(0ad6b56e-2fd0-4653-867f-174ff7a29321): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:05.257912 kubelet[2815]: E1108 00:41:05.257834 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:41:05.306192 systemd[1]: Started sshd@7-10.230.37.190:22-139.178.68.195:42700.service - OpenSSH per-connection server daemon (139.178.68.195:42700). Nov 8 00:41:06.282865 sshd[5504]: Accepted publickey for core from 139.178.68.195 port 42700 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:06.285921 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:06.319064 systemd-logind[1595]: New session 10 of user core. Nov 8 00:41:06.326124 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:41:07.558571 sshd[5504]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:07.571994 systemd[1]: sshd@7-10.230.37.190:22-139.178.68.195:42700.service: Deactivated successfully. Nov 8 00:41:07.580447 systemd-logind[1595]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:41:07.580536 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:41:07.584554 systemd-logind[1595]: Removed session 10. 
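Every `PullImage ... failed` entry in this stretch of the log follows the same pattern: containerd's resolver asks ghcr.io for the `v3.30.4` manifest, gets HTTP 404 ("trying next host - response was http.StatusNotFound"), and surfaces a gRPC NotFound to the kubelet, which moves the container from ErrImagePull into ImagePullBackOff (the kubelet's default image-pull backoff starts around 10 s and doubles up to a 5 min cap, which is consistent with the widening gaps between retries above). The registry check can be reproduced off-node with the standard OCI distribution API. The sketch below is a minimal, hypothetical probe, not part of this log: the repository name and tag are taken from the entries above, and it assumes ghcr.io issues anonymous pull tokens for public repositories (its documented behavior).

```python
#!/usr/bin/env python3
"""Minimal sketch: probe ghcr.io for a manifest the way containerd's
resolver does, to reproduce the 404 seen in the log above.
Assumes the repository is public, so an anonymous pull token is issued."""
import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/kube-controllers"   # repository from the log
TAG = "v3.30.4"                            # tag that failed to resolve

# Step 1: fetch an anonymous bearer token scoped to pulling this repo.
token_url = f"https://ghcr.io/token?scope=repository:{REPO}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# Step 2: HEAD the manifest. 200 means the tag exists; 404 matches the
# "not found" that containerd reported for every image in this section.
req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
    method="HEAD",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": ", ".join([
            "application/vnd.oci.image.index.v1+json",
            "application/vnd.docker.distribution.manifest.list.v2+json",
            "application/vnd.oci.image.manifest.v1+json",
        ]),
    },
)
try:
    with urllib.request.urlopen(req) as resp:
        print(f"{REPO}:{TAG} exists (HTTP {resp.status})")
except urllib.error.HTTPError as err:
    print(f"{REPO}:{TAG} -> HTTP {err.code}")   # expect 404, as logged
```

Pointing the same probe at the other image names from the log (apiserver, whisker, whisker-backend, csi, node-driver-registrar, goldmane) corresponds to the identical NotFound each of them hit here.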
Nov 8 00:41:08.935066 containerd[1626]: time="2025-11-08T00:41:08.934986597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:41:09.249853 containerd[1626]: time="2025-11-08T00:41:09.249696360Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:09.251016 containerd[1626]: time="2025-11-08T00:41:09.250961330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:41:09.251107 containerd[1626]: time="2025-11-08T00:41:09.251065876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:41:09.251499 kubelet[2815]: E1108 00:41:09.251337 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:41:09.251499 kubelet[2815]: E1108 00:41:09.251410 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:41:09.252321 kubelet[2815]: E1108 00:41:09.252233 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crnk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:09.255916 containerd[1626]: time="2025-11-08T00:41:09.255475676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:41:09.569168 containerd[1626]: time="2025-11-08T00:41:09.569059085Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:09.572051 containerd[1626]: time="2025-11-08T00:41:09.571568269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:41:09.572051 containerd[1626]: time="2025-11-08T00:41:09.571703820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:41:09.573600 kubelet[2815]: E1108 00:41:09.572352 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:41:09.573600 kubelet[2815]: E1108 00:41:09.572418 2815 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:41:09.573600 kubelet[2815]: E1108 00:41:09.572569 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crnk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:09.574248 kubelet[2815]: E1108 00:41:09.574196 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:41:11.928647 containerd[1626]: time="2025-11-08T00:41:11.928366545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:41:12.251766 containerd[1626]: time="2025-11-08T00:41:12.249708252Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:12.254396 containerd[1626]: time="2025-11-08T00:41:12.254152246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:41:12.254396 containerd[1626]: time="2025-11-08T00:41:12.254313873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:41:12.255191 kubelet[2815]: E1108 00:41:12.255075 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:41:12.255191 kubelet[2815]: E1108 00:41:12.255157 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:41:12.256361 kubelet[2815]: E1108 00:41:12.255358 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57g2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f7fc8b8c-rqgv4_calico-apiserver(1c25bfb9-44c6-4360-955b-d1bd985cf551): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:12.256542 kubelet[2815]: E1108 00:41:12.256470 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:41:12.714199 systemd[1]: Started sshd@8-10.230.37.190:22-139.178.68.195:42708.service - OpenSSH per-connection server daemon (139.178.68.195:42708). 
Nov 8 00:41:12.933017 containerd[1626]: time="2025-11-08T00:41:12.932713104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:41:13.308430 containerd[1626]: time="2025-11-08T00:41:13.308090010Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:13.310830 containerd[1626]: time="2025-11-08T00:41:13.310779799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:41:13.310973 containerd[1626]: time="2025-11-08T00:41:13.310917942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:41:13.311753 kubelet[2815]: E1108 00:41:13.311337 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:41:13.311753 kubelet[2815]: E1108 00:41:13.311411 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:41:13.311753 kubelet[2815]: E1108 00:41:13.311604 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvnpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wvrwm_calico-system(9a229dd5-8929-4dea-a351-ff8ac4498f1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:13.313928 kubelet[2815]: E1108 00:41:13.312945 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:41:13.649395 sshd[5533]: Accepted publickey for core from 139.178.68.195 port 42708 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:13.653118 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:13.666593 systemd-logind[1595]: New session 11 of user core. Nov 8 00:41:13.673334 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:41:14.476228 sshd[5533]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:14.486458 systemd-logind[1595]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:41:14.488838 systemd[1]: sshd@8-10.230.37.190:22-139.178.68.195:42708.service: Deactivated successfully. Nov 8 00:41:14.516449 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:41:14.521820 systemd-logind[1595]: Removed session 11. 
Nov 8 00:41:14.929812 kubelet[2815]: E1108 00:41:14.929747 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:41:15.929802 kubelet[2815]: E1108 00:41:15.929729 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:41:19.638860 systemd[1]: Started sshd@9-10.230.37.190:22-139.178.68.195:57288.service - OpenSSH per-connection server daemon (139.178.68.195:57288). Nov 8 00:41:19.927549 kubelet[2815]: E1108 00:41:19.926973 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:41:20.597329 sshd[5548]: Accepted publickey for core from 139.178.68.195 port 57288 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:20.601700 sshd[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:20.613976 systemd-logind[1595]: New session 12 of user core. Nov 8 00:41:20.619560 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 8 00:41:20.933488 kubelet[2815]: E1108 00:41:20.933311 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:41:21.429658 sshd[5548]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:21.438210 systemd-logind[1595]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:41:21.440162 systemd[1]: sshd@9-10.230.37.190:22-139.178.68.195:57288.service: Deactivated successfully. Nov 8 00:41:21.451398 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:41:21.455661 systemd-logind[1595]: Removed session 12. Nov 8 00:41:21.587620 systemd[1]: Started sshd@10-10.230.37.190:22-139.178.68.195:57300.service - OpenSSH per-connection server daemon (139.178.68.195:57300). Nov 8 00:41:22.515168 sshd[5563]: Accepted publickey for core from 139.178.68.195 port 57300 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:22.518867 sshd[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:22.534949 systemd-logind[1595]: New session 13 of user core. Nov 8 00:41:22.541402 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:41:23.415459 sshd[5563]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:23.422802 systemd[1]: sshd@10-10.230.37.190:22-139.178.68.195:57300.service: Deactivated successfully. Nov 8 00:41:23.431736 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:41:23.436435 systemd-logind[1595]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:41:23.437934 systemd-logind[1595]: Removed session 13. Nov 8 00:41:23.577517 systemd[1]: Started sshd@11-10.230.37.190:22-139.178.68.195:48814.service - OpenSSH per-connection server daemon (139.178.68.195:48814). 
Nov 8 00:41:23.926103 kubelet[2815]: E1108 00:41:23.925700 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:41:24.527851 sshd[5575]: Accepted publickey for core from 139.178.68.195 port 48814 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:24.530883 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:24.542760 systemd-logind[1595]: New session 14 of user core. Nov 8 00:41:24.547702 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:41:24.929818 kubelet[2815]: E1108 00:41:24.929610 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:41:25.367764 sshd[5575]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:25.376712 systemd[1]: sshd@11-10.230.37.190:22-139.178.68.195:48814.service: Deactivated successfully. Nov 8 00:41:25.389718 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:41:25.393030 systemd-logind[1595]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:41:25.397366 systemd-logind[1595]: Removed session 14. 
Nov 8 00:41:26.928738 kubelet[2815]: E1108 00:41:26.928658 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:41:27.925973 kubelet[2815]: E1108 00:41:27.925833 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:41:30.529200 systemd[1]: Started sshd@12-10.230.37.190:22-139.178.68.195:48826.service - OpenSSH per-connection server daemon (139.178.68.195:48826). Nov 8 00:41:31.476204 sshd[5616]: Accepted publickey for core from 139.178.68.195 port 48826 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:31.477888 sshd[5616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:31.485184 systemd-logind[1595]: New session 15 of user core. Nov 8 00:41:31.490715 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:41:32.282535 sshd[5616]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:32.291197 systemd[1]: sshd@12-10.230.37.190:22-139.178.68.195:48826.service: Deactivated successfully. Nov 8 00:41:32.291495 systemd-logind[1595]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:41:32.298517 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:41:32.303280 systemd-logind[1595]: Removed session 15. 
Nov 8 00:41:32.928635 kubelet[2815]: E1108 00:41:32.927107 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:41:35.929520 kubelet[2815]: E1108 00:41:35.929457 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:41:35.930215 kubelet[2815]: E1108 00:41:35.929995 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:41:37.439389 systemd[1]: Started sshd@13-10.230.37.190:22-139.178.68.195:51180.service - OpenSSH per-connection server daemon (139.178.68.195:51180). Nov 8 00:41:38.370215 sshd[5632]: Accepted publickey for core from 139.178.68.195 port 51180 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:38.371665 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:38.385380 systemd-logind[1595]: New session 16 of user core. Nov 8 00:41:38.393666 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 8 00:41:38.929482 kubelet[2815]: E1108 00:41:38.927418 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:41:38.932948 kubelet[2815]: E1108 00:41:38.931741 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:41:39.173996 sshd[5632]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:39.178243 systemd[1]: sshd@13-10.230.37.190:22-139.178.68.195:51180.service: Deactivated successfully. Nov 8 00:41:39.186610 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:41:39.189043 systemd-logind[1595]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:41:39.192834 systemd-logind[1595]: Removed session 16. Nov 8 00:41:41.926914 kubelet[2815]: E1108 00:41:41.926716 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:41:44.327942 systemd[1]: Started sshd@14-10.230.37.190:22-139.178.68.195:49806.service - OpenSSH per-connection server daemon (139.178.68.195:49806). Nov 8 00:41:45.270164 sshd[5648]: Accepted publickey for core from 139.178.68.195 port 49806 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:45.272448 sshd[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:45.283624 systemd-logind[1595]: New session 17 of user core. Nov 8 00:41:45.290604 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:41:46.057491 sshd[5648]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:46.070230 systemd-logind[1595]: Session 17 logged out. Waiting for processes to exit. 
Nov 8 00:41:46.071943 systemd[1]: sshd@14-10.230.37.190:22-139.178.68.195:49806.service: Deactivated successfully. Nov 8 00:41:46.081849 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:41:46.086303 systemd-logind[1595]: Removed session 17. Nov 8 00:41:47.925615 kubelet[2815]: E1108 00:41:47.925512 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:41:47.927418 containerd[1626]: time="2025-11-08T00:41:47.926158261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:41:48.246557 containerd[1626]: time="2025-11-08T00:41:48.245295677Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:48.247066 containerd[1626]: time="2025-11-08T00:41:48.247006776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:41:48.248358 kubelet[2815]: E1108 00:41:48.247397 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:41:48.248358 kubelet[2815]: E1108 00:41:48.247550 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:41:48.248358 kubelet[2815]: E1108 00:41:48.248004 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l7m49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f7fc8b8c-7f5zz_calico-apiserver(0ad6b56e-2fd0-4653-867f-174ff7a29321): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:48.249509 kubelet[2815]: E1108 00:41:48.249235 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:41:48.255452 containerd[1626]: time="2025-11-08T00:41:48.247143789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:41:48.964635 update_engine[1601]: I20251108 00:41:48.964406 1601 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 8 00:41:48.965587 update_engine[1601]: I20251108 00:41:48.965363 1601 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 8 00:41:48.967374 update_engine[1601]: I20251108 00:41:48.967211 1601 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 8 00:41:48.970018 update_engine[1601]: I20251108 00:41:48.968905 1601 omaha_request_params.cc:62] Current group 
set to lts Nov 8 00:41:48.970018 update_engine[1601]: I20251108 00:41:48.969071 1601 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 8 00:41:48.970018 update_engine[1601]: I20251108 00:41:48.969094 1601 update_attempter.cc:643] Scheduling an action processor start. Nov 8 00:41:48.970018 update_engine[1601]: I20251108 00:41:48.969122 1601 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 8 00:41:48.970018 update_engine[1601]: I20251108 00:41:48.969224 1601 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 8 00:41:48.970018 update_engine[1601]: I20251108 00:41:48.969328 1601 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 8 00:41:48.970018 update_engine[1601]: I20251108 00:41:48.969349 1601 omaha_request_action.cc:272] Request: [multi-line Omaha request XML omitted] Nov 8 00:41:48.970018 update_engine[1601]: I20251108 00:41:48.969372 1601 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:41:48.981599 update_engine[1601]: I20251108 00:41:48.981038 1601 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:41:48.981599 update_engine[1601]: I20251108 00:41:48.981511 1601 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:41:48.987066 update_engine[1601]: E20251108 00:41:48.986932 1601 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:41:48.987066 update_engine[1601]: I20251108 00:41:48.987023 1601 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 8 00:41:49.001312 locksmithd[1631]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 8 00:41:49.927460 containerd[1626]: time="2025-11-08T00:41:49.926007645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:41:50.270764 containerd[1626]: time="2025-11-08T00:41:50.270340542Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:50.272170 containerd[1626]: time="2025-11-08T00:41:50.271820297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:41:50.272170 containerd[1626]: time="2025-11-08T00:41:50.271928939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:41:50.272864 kubelet[2815]: E1108 00:41:50.272431 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:41:50.272864 kubelet[2815]: E1108 00:41:50.272503 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:41:50.272864 kubelet[2815]: E1108 00:41:50.272669 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crnk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:50.277237 containerd[1626]: time="2025-11-08T00:41:50.276879541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:41:50.593784 containerd[1626]: time="2025-11-08T00:41:50.593678804Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:50.594911 containerd[1626]: time="2025-11-08T00:41:50.594840440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:41:50.595110 containerd[1626]: time="2025-11-08T00:41:50.594946979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:41:50.595340 kubelet[2815]: E1108 00:41:50.595154 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:41:50.595340 kubelet[2815]: E1108 00:41:50.595259 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:41:50.595547 kubelet[2815]: E1108 00:41:50.595486 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-crnk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-frtm6_calico-system(732940a2-6d95-4610-b476-89508bce10b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:50.597016 kubelet[2815]: E1108 00:41:50.596948 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:41:51.206715 systemd[1]: Started sshd@15-10.230.37.190:22-139.178.68.195:49818.service - OpenSSH per-connection server daemon (139.178.68.195:49818). Nov 8 00:41:52.126848 sshd[5670]: Accepted publickey for core from 139.178.68.195 port 49818 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:52.137760 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:52.154398 systemd-logind[1595]: New session 18 of user core. Nov 8 00:41:52.158727 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:41:52.928169 containerd[1626]: time="2025-11-08T00:41:52.927877389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:41:52.929532 kubelet[2815]: E1108 00:41:52.928739 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:41:52.979071 sshd[5670]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:52.990638 systemd[1]: sshd@15-10.230.37.190:22-139.178.68.195:49818.service: Deactivated successfully. Nov 8 00:41:53.009279 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:41:53.009444 systemd-logind[1595]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:41:53.013039 systemd-logind[1595]: Removed session 18. Nov 8 00:41:53.131932 systemd[1]: Started sshd@16-10.230.37.190:22-139.178.68.195:49824.service - OpenSSH per-connection server daemon (139.178.68.195:49824). 
Nov 8 00:41:53.245858 containerd[1626]: time="2025-11-08T00:41:53.245001109Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:53.249814 containerd[1626]: time="2025-11-08T00:41:53.249650268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:41:53.250031 containerd[1626]: time="2025-11-08T00:41:53.249693080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:41:53.250259 kubelet[2815]: E1108 00:41:53.250195 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:41:53.250360 kubelet[2815]: E1108 00:41:53.250287 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:41:53.250508 kubelet[2815]: E1108 00:41:53.250444 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:901a64b030a14723b934dd11dbc62d64,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cftv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d66d4898-wkptg_calico-system(ff4e3273-e49c-43ab-a17a-ef1a2a65c067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:53.254712 containerd[1626]: 
time="2025-11-08T00:41:53.254604755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:41:53.567241 containerd[1626]: time="2025-11-08T00:41:53.567116895Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:53.569590 containerd[1626]: time="2025-11-08T00:41:53.569534373Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:41:53.569738 containerd[1626]: time="2025-11-08T00:41:53.569671331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:41:53.570148 kubelet[2815]: E1108 00:41:53.570062 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:41:53.570236 kubelet[2815]: E1108 00:41:53.570177 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:41:53.570459 kubelet[2815]: E1108 00:41:53.570380 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cftv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-84d66d4898-wkptg_calico-system(ff4e3273-e49c-43ab-a17a-ef1a2a65c067): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:53.572265 kubelet[2815]: E1108 00:41:53.572111 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:41:53.930359 containerd[1626]: time="2025-11-08T00:41:53.928460125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:41:54.080807 sshd[5684]: Accepted publickey for core from 139.178.68.195 port 49824 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:54.085292 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:54.102255 systemd-logind[1595]: New session 19 of user core. 
Nov 8 00:41:54.109642 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:41:54.291856 containerd[1626]: time="2025-11-08T00:41:54.290825239Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:41:54.292801 containerd[1626]: time="2025-11-08T00:41:54.292484497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:41:54.292801 containerd[1626]: time="2025-11-08T00:41:54.292627719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:41:54.296153 kubelet[2815]: E1108 00:41:54.293420 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:41:54.296153 kubelet[2815]: E1108 00:41:54.293509 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:41:54.296153 kubelet[2815]: E1108 00:41:54.293725 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dm9b2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7d465f66d6-5v9hs_calico-system(321e585a-41b7-4e8f-995a-c57a69c6e824): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:41:54.299701 kubelet[2815]: E1108 00:41:54.298856 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:41:55.234619 sshd[5684]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:55.249976 systemd[1]: sshd@16-10.230.37.190:22-139.178.68.195:49824.service: Deactivated successfully. Nov 8 00:41:55.264509 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:41:55.264917 systemd-logind[1595]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:41:55.274200 systemd-logind[1595]: Removed session 19. Nov 8 00:41:55.397604 systemd[1]: Started sshd@17-10.230.37.190:22-139.178.68.195:47658.service - OpenSSH per-connection server daemon (139.178.68.195:47658). Nov 8 00:41:56.370678 sshd[5696]: Accepted publickey for core from 139.178.68.195 port 47658 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:56.373821 sshd[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:56.394253 systemd-logind[1595]: New session 20 of user core. Nov 8 00:41:56.399999 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:41:58.196441 sshd[5696]: pam_unix(sshd:session): session closed for user core Nov 8 00:41:58.214951 systemd-logind[1595]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:41:58.220315 systemd[1]: sshd@17-10.230.37.190:22-139.178.68.195:47658.service: Deactivated successfully. Nov 8 00:41:58.239700 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:41:58.243425 systemd-logind[1595]: Removed session 20. Nov 8 00:41:58.352463 systemd[1]: Started sshd@18-10.230.37.190:22-139.178.68.195:47672.service - OpenSSH per-connection server daemon (139.178.68.195:47672). 
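[Annotation] Between attempts the kubelet applies its image-pull back-off, which is why the ErrImagePull and ImagePullBackOff entries for each pod recur at widening intervals through the rest of this log. A toy sketch of that schedule; the constants are the kubelet's documented defaults (10s initial delay, doubling, capped at 5 minutes), not values read from this host:

# Sketch of kubelet-style image pull back-off: 10s initial, doubling,
# capped at 5 minutes (kubelet defaults; assumed, not read from this node).
def backoff_schedule(failures: int, initial: float = 10.0, cap: float = 300.0):
    delay = initial
    for _ in range(failures):
        yield delay
        delay = min(delay * 2, cap)

print(list(backoff_schedule(6)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]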
Nov 8 00:41:58.901157 update_engine[1601]: I20251108 00:41:58.899230 1601 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:41:58.901157 update_engine[1601]: I20251108 00:41:58.899854 1601 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:41:58.901157 update_engine[1601]: I20251108 00:41:58.900326 1601 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:41:58.904447 update_engine[1601]: E20251108 00:41:58.904412 1601 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:41:58.904598 update_engine[1601]: I20251108 00:41:58.904567 1601 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 8 00:41:59.291245 sshd[5737]: Accepted publickey for core from 139.178.68.195 port 47672 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:41:59.294637 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:41:59.308171 systemd-logind[1595]: New session 21 of user core. Nov 8 00:41:59.315570 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:42:00.376351 sshd[5737]: pam_unix(sshd:session): session closed for user core Nov 8 00:42:00.382855 systemd-logind[1595]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:42:00.383780 systemd[1]: sshd@18-10.230.37.190:22-139.178.68.195:47672.service: Deactivated successfully. Nov 8 00:42:00.388116 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:42:00.390616 systemd-logind[1595]: Removed session 21. Nov 8 00:42:00.528457 systemd[1]: Started sshd@19-10.230.37.190:22-139.178.68.195:47688.service - OpenSSH per-connection server daemon (139.178.68.195:47688). Nov 8 00:42:00.931236 containerd[1626]: time="2025-11-08T00:42:00.930612376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:42:01.262753 containerd[1626]: time="2025-11-08T00:42:01.262455707Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:42:01.263986 containerd[1626]: time="2025-11-08T00:42:01.263848973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:42:01.264074 containerd[1626]: time="2025-11-08T00:42:01.264006696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:42:01.266194 kubelet[2815]: E1108 00:42:01.264389 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:42:01.266194 kubelet[2815]: E1108 00:42:01.264507 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:42:01.266194 kubelet[2815]: E1108 00:42:01.264743 2815 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57g2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-86f7fc8b8c-rqgv4_calico-apiserver(1c25bfb9-44c6-4360-955b-d1bd985cf551): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:42:01.268601 kubelet[2815]: E1108 00:42:01.268464 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:42:01.452963 sshd[5762]: Accepted publickey for core from 139.178.68.195 port 47688 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:42:01.455610 sshd[5762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:42:01.466334 systemd-logind[1595]: New session 22 of user core. Nov 8 00:42:01.470588 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:42:02.252643 sshd[5762]: pam_unix(sshd:session): session closed for user core Nov 8 00:42:02.257254 systemd-logind[1595]: Session 22 logged out. 
Waiting for processes to exit. Nov 8 00:42:02.258895 systemd[1]: sshd@19-10.230.37.190:22-139.178.68.195:47688.service: Deactivated successfully. Nov 8 00:42:02.267009 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:42:02.268895 systemd-logind[1595]: Removed session 22. Nov 8 00:42:02.929680 kubelet[2815]: E1108 00:42:02.929583 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:42:03.927828 kubelet[2815]: E1108 00:42:03.927753 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:42:05.924958 containerd[1626]: time="2025-11-08T00:42:05.924891936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:42:06.262116 containerd[1626]: time="2025-11-08T00:42:06.261778060Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:42:06.263261 containerd[1626]: time="2025-11-08T00:42:06.263211746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:42:06.263517 containerd[1626]: time="2025-11-08T00:42:06.263362074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:42:06.264548 kubelet[2815]: E1108 00:42:06.264274 2815 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:42:06.265183 kubelet[2815]: E1108 00:42:06.264769 2815 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:42:06.265183 kubelet[2815]: E1108 00:42:06.264991 2815 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vvnpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wvrwm_calico-system(9a229dd5-8929-4dea-a351-ff8ac4498f1d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:42:06.266934 kubelet[2815]: E1108 00:42:06.266879 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:42:07.414540 systemd[1]: Started sshd@20-10.230.37.190:22-139.178.68.195:49752.service - OpenSSH per-connection server daemon (139.178.68.195:49752). Nov 8 00:42:07.927338 kubelet[2815]: E1108 00:42:07.927267 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:42:07.931941 kubelet[2815]: E1108 00:42:07.928290 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:42:08.358551 sshd[5785]: Accepted publickey for core from 139.178.68.195 port 49752 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:42:08.360988 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:42:08.371658 systemd-logind[1595]: New session 23 of user core. Nov 8 00:42:08.381571 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 00:42:08.899932 update_engine[1601]: I20251108 00:42:08.899051 1601 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:42:08.899932 update_engine[1601]: I20251108 00:42:08.899518 1601 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:42:08.899932 update_engine[1601]: I20251108 00:42:08.899849 1601 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:42:08.903275 update_engine[1601]: E20251108 00:42:08.903226 1601 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:42:08.903560 update_engine[1601]: I20251108 00:42:08.903512 1601 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 8 00:42:09.074545 systemd[1]: Started sshd@21-10.230.37.190:22-39.99.144.218:48798.service - OpenSSH per-connection server daemon (39.99.144.218:48798). 
Nov 8 00:42:09.188016 sshd[5785]: pam_unix(sshd:session): session closed for user core Nov 8 00:42:09.194482 systemd[1]: sshd@20-10.230.37.190:22-139.178.68.195:49752.service: Deactivated successfully. Nov 8 00:42:09.200784 systemd-logind[1595]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:42:09.201753 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:42:09.205103 systemd-logind[1595]: Removed session 23. Nov 8 00:42:10.598292 sshd[5796]: Connection closed by authenticating user root 39.99.144.218 port 48798 [preauth] Nov 8 00:42:10.605027 systemd[1]: sshd@21-10.230.37.190:22-39.99.144.218:48798.service: Deactivated successfully. Nov 8 00:42:11.925816 kubelet[2815]: E1108 00:42:11.925471 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551" Nov 8 00:42:14.414987 systemd[1]: Started sshd@22-10.230.37.190:22-139.178.68.195:55606.service - OpenSSH per-connection server daemon (139.178.68.195:55606). Nov 8 00:42:15.351331 sshd[5806]: Accepted publickey for core from 139.178.68.195 port 55606 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:42:15.355055 sshd[5806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:42:15.367438 systemd-logind[1595]: New session 24 of user core. Nov 8 00:42:15.375906 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:42:15.927209 kubelet[2815]: E1108 00:42:15.925115 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-7f5zz" podUID="0ad6b56e-2fd0-4653-867f-174ff7a29321" Nov 8 00:42:16.252776 sshd[5806]: pam_unix(sshd:session): session closed for user core Nov 8 00:42:16.261585 systemd[1]: sshd@22-10.230.37.190:22-139.178.68.195:55606.service: Deactivated successfully. Nov 8 00:42:16.273796 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:42:16.275532 systemd-logind[1595]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:42:16.276969 systemd-logind[1595]: Removed session 24. 
Nov 8 00:42:17.936632 kubelet[2815]: E1108 00:42:17.936091 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-frtm6" podUID="732940a2-6d95-4610-b476-89508bce10b7" Nov 8 00:42:18.902166 update_engine[1601]: I20251108 00:42:18.900365 1601 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:42:18.902166 update_engine[1601]: I20251108 00:42:18.900998 1601 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:42:18.902166 update_engine[1601]: I20251108 00:42:18.901371 1601 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:42:18.903311 update_engine[1601]: E20251108 00:42:18.903267 1601 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:42:18.903488 update_engine[1601]: I20251108 00:42:18.903457 1601 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 8 00:42:18.903777 update_engine[1601]: I20251108 00:42:18.903577 1601 omaha_request_action.cc:617] Omaha request response: Nov 8 00:42:18.904341 update_engine[1601]: E20251108 00:42:18.904047 1601 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 8 00:42:18.912260 update_engine[1601]: I20251108 00:42:18.912195 1601 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.912402 1601 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.912428 1601 update_attempter.cc:306] Processing Done. Nov 8 00:42:18.913472 update_engine[1601]: E20251108 00:42:18.912484 1601 update_attempter.cc:619] Update failed. Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.912500 1601 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.912511 1601 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.912524 1601 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
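[Annotation] The update_engine excerpt shows its fetcher pattern: arm a 1-second timeout source, attempt the transfer, count "No HTTP response, retry N" up to three times, then report 0 bytes downloaded and convert the failure to error 37 (kActionCodeOmahaErrorInHTTPResponse). A schematic sketch of that retry shape; the counts and code mirror the log, but the function body is illustrative and is not update_engine's actual implementation:

# Schematic of the fetcher behaviour visible above: retry the Omaha POST a
# few times with a short timer, then map exhaustion to the error code the
# log reports (37, kActionCodeOmahaErrorInHTTPResponse). Illustrative only.
import time
import urllib.error
import urllib.request

K_ACTION_CODE_OMAHA_ERROR_IN_HTTP_RESPONSE = 37

def fetch_omaha(url: str, body: bytes, max_retries: int = 3) -> int:
    for attempt in range(1, max_retries + 1):
        try:
            req = urllib.request.Request(url, data=body)  # POST
            with urllib.request.urlopen(req, timeout=1) as resp:
                return resp.status  # HTTP status on success
        except (urllib.error.URLError, OSError):
            # Mirrors "No HTTP response, retry N" followed by the 1s timer;
            # an unresolvable host like "disabled" lands here immediately.
            print(f"No HTTP response, retry {attempt}")
            time.sleep(1)
    return K_ACTION_CODE_OMAHA_ERROR_IN_HTTP_RESPONSE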
Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.912668 1601 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.912716 1601 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.912730 1601 omaha_request_action.cc:272] Request: Nov 8 00:42:18.913472 update_engine[1601]: [Omaha request XML body not captured in this log] Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.912744 1601 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.913069 1601 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 8 00:42:18.913472 update_engine[1601]: I20251108 00:42:18.913421 1601 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 8 00:42:18.915236 update_engine[1601]: E20251108 00:42:18.914862 1601 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 8 00:42:18.915236 update_engine[1601]: I20251108 00:42:18.914933 1601 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 8 00:42:18.915236 update_engine[1601]: I20251108 00:42:18.914951 1601 omaha_request_action.cc:617] Omaha request response: Nov 8 00:42:18.915236 update_engine[1601]: I20251108 00:42:18.914965 1601 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 8 00:42:18.915236 update_engine[1601]: I20251108 00:42:18.914975 1601 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 8 00:42:18.915236 update_engine[1601]: I20251108 00:42:18.914985 1601 update_attempter.cc:306] Processing Done. Nov 8 00:42:18.915236 update_engine[1601]: I20251108 00:42:18.914996 1601 update_attempter.cc:310] Error event sent.
Nov 8 00:42:18.915236 update_engine[1601]: I20251108 00:42:18.915014 1601 update_check_scheduler.cc:74] Next update check in 46m30s Nov 8 00:42:18.915628 locksmithd[1631]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 8 00:42:18.915628 locksmithd[1631]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 8 00:42:18.929812 kubelet[2815]: E1108 00:42:18.929739 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wvrwm" podUID="9a229dd5-8929-4dea-a351-ff8ac4498f1d" Nov 8 00:42:20.926816 kubelet[2815]: E1108 00:42:20.926353 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7d465f66d6-5v9hs" podUID="321e585a-41b7-4e8f-995a-c57a69c6e824" Nov 8 00:42:21.410506 systemd[1]: Started sshd@23-10.230.37.190:22-139.178.68.195:55614.service - OpenSSH per-connection server daemon (139.178.68.195:55614). Nov 8 00:42:21.930839 kubelet[2815]: E1108 00:42:21.930746 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-84d66d4898-wkptg" podUID="ff4e3273-e49c-43ab-a17a-ef1a2a65c067" Nov 8 00:42:22.352228 sshd[5820]: Accepted publickey for core from 139.178.68.195 port 55614 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 00:42:22.353721 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:42:22.366195 systemd-logind[1595]: New session 25 of user core. Nov 8 00:42:22.373841 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 8 00:42:23.175895 sshd[5820]: pam_unix(sshd:session): session closed for user core Nov 8 00:42:23.185660 systemd[1]: sshd@23-10.230.37.190:22-139.178.68.195:55614.service: Deactivated successfully. 
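[Annotation] The host update_engine keeps failing to resolve is literally the string "disabled": on Flatcar, setting SERVER=disabled in /etc/flatcar/update.conf is the documented way to switch off Omaha update checks, so every transfer ends with "Could not resolve host: disabled" and locksmithd settles back to UPDATE_STATUS_IDLE until the next scheduled check (46m30s here). A small sketch for confirming that configuration on a node; the file path and SERVER key follow Flatcar's update documentation, the parsing is illustrative:

# Sketch: read /etc/flatcar/update.conf and report whether the Omaha
# server has been set to the sentinel value "disabled".
from pathlib import Path

conf = Path("/etc/flatcar/update.conf")
settings = {}
if conf.exists():
    for line in conf.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip()

# SERVER=disabled explains "Could not resolve host: disabled" above.
print("updates disabled:", settings.get("SERVER") == "disabled")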
Nov 8 00:42:23.204387 systemd[1]: session-25.scope: Deactivated successfully. Nov 8 00:42:23.208792 systemd-logind[1595]: Session 25 logged out. Waiting for processes to exit. Nov 8 00:42:23.212207 systemd-logind[1595]: Removed session 25. Nov 8 00:42:23.929257 kubelet[2815]: E1108 00:42:23.929196 2815 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-86f7fc8b8c-rqgv4" podUID="1c25bfb9-44c6-4360-955b-d1bd985cf551"